
Hany Hassan Awadalla


Microsoft Redmond

Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization

Aug 21, 2022
Pengcheng He, Baolin Peng, Liyang Lu, Song Wang, Jie Mei, Yang Liu, Ruochen Xu, Hany Hassan Awadalla, Yu Shi, Chenguang Zhu, Wayne Xiong, Michael Zeng, Jianfeng Gao, Xuedong Huang


Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation

Aug 11, 2022
Muhammad ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan Awadalla


Building Multilingual Machine Translation Systems That Serve Arbitrary X-Y Translations

Jun 30, 2022
Akiko Eriguchi, Shufang Xie, Tao Qin, Hany Hassan Awadalla


Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers

May 28, 2022
Rui Liu, Young Jin Kim, Alexandre Muzio, Barzan Mozafari, Hany Hassan Awadalla


Ensembling of Distilled Models from Multi-task Teachers for Constrained Resource Language Pairs

Nov 26, 2021
Amr Hendy, Esraa A. Gad, Mohamed Abdelghaffar, Jailan S. ElMosalami, Mohamed Afify, Ahmed Y. Tawfik, Hany Hassan Awadalla


Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task

Nov 03, 2021
Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei


Scalable and Efficient MoE Training for Multitask Multilingual Models

Sep 22, 2021
Young Jin Kim, Ammar Ahmad Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Hassan Awadalla


DeltaLM: Encoder-Decoder Pre-training for Language Generation and Translation by Augmenting Pretrained Multilingual Encoders

Jun 25, 2021
Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei


XLM-T: Scaling up Multilingual Machine Translation with Pretrained Cross-lingual Transformer Encoders

Dec 31, 2020
Shuming Ma, Jian Yang, Haoyang Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, Xia Song, Arul Menezes, Furu Wei


Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions

Nov 16, 2020
Muhammad N. ElNokrashy, Amr Hendy, Mohamed Abdelghaffar, Mohamed Afify, Ahmed Tawfik, Hany Hassan Awadalla
