Amr Hendy

Microsoft ATL Cairo

How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

Feb 18, 2023
Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, Hany Hassan Awadalla

Domain Specific Sub-network for Multi-Domain Neural Machine Translation

Oct 18, 2022
Amr Hendy, Mohamed Abdelghaffar, Mohamed Afify, Ahmed Y. Tawfik

Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation

Aug 11, 2022
Muhammad ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan Awadalla

Ensembling of Distilled Models from Multi-task Teachers for Constrained Resource Language Pairs

Nov 26, 2021
Amr Hendy, Esraa A. Gad, Mohamed Abdelghaffar, Jailan S. ElMosalami, Mohamed Afify, Ahmed Y. Tawfik, Hany Hassan Awadalla

Scalable and Efficient MoE Training for Multitask Multilingual Models

Sep 22, 2021
Young Jin Kim, Ammar Ahmad Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Hassan Awadalla

Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions

Nov 16, 2020
Muhammad N. ElNokrashy, Amr Hendy, Mohamed Abdelghaffar, Mohamed Afify, Ahmed Tawfik, Hany Hassan Awadalla
