Peyman Passban
Huawei Noah's Ark Lab

What is Lost in Knowledge Distillation?
Nov 07, 2023
Manas Mohanty, Tanya Roosta, Peyman Passban

Training Mixed-Domain Translation Models via Federated Learning
May 03, 2022
Peyman Passban, Tanya Roosta, Rahul Gupta, Ankit Chadha, Clement Chung

Dynamic Position Encoding for Transformers
Apr 18, 2022
Joyce Zheng, Mehdi Rezagholizadeh, Peyman Passban

Communication-Efficient Federated Learning for Neural Machine Translation
Dec 12, 2021
Tanya Roosta, Peyman Passban, Ankit Chadha

Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax
Jun 02, 2021
Ehsan Kamalloo, Mehdi Rezagholizadeh, Peyman Passban, Ali Ghodsi

Robust Embeddings Via Distributions
Apr 17, 2021
Kira A. Selby, Yinong Wang, Ruizhe Wang, Peyman Passban, Ahmad Rashid, Mehdi Rezagholizadeh, Pascal Poupart

Revisiting Robust Neural Machine Translation: A Transformer Case Study
Dec 31, 2020
Peyman Passban, Puneeth S. M. Saladi, Qun Liu

ALP-KD: Attention-Based Layer Projection for Knowledge Distillation
Dec 27, 2020
Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu

Why Skip If You Can Combine: A Simple Knowledge Distillation Technique for Intermediate Layers
Oct 06, 2020
Yimeng Wu, Peyman Passban, Mehdi Rezagholizadeh, Qun Liu