Ankur Bapna

Leveraging unsupervised and weakly-supervised data to improve direct speech-to-speech translation

Mar 24, 2022
Ye Jia, Yifan Ding, Ankur Bapna, Colin Cherry, Yu Zhang, Alexis Conneau, Nobuyuki Morioka

Multilingual Mix: Example Interpolation Improves Multilingual Neural Machine Translation

Mar 15, 2022
Yong Cheng, Ankur Bapna, Orhan Firat, Yuan Cao, Pidong Wang, Wolfgang Macherey

Examining Scaling and Transfer of Language Model Architectures for Machine Translation

Feb 16, 2022
Biao Zhang, Behrooz Ghorbani, Ankur Bapna, Yong Cheng, Xavier Garcia, Jonathan Shen, Orhan Firat

mSLAM: Massively multilingual joint pre-training for speech and text

Feb 03, 2022
Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, Alexis Conneau

Towards the Next 1000 Languages in Multilingual Machine Translation: Exploring the Synergy Between Supervised and Self-Supervised Learning

Jan 13, 2022
Aditya Siddhant, Ankur Bapna, Orhan Firat, Yuan Cao, Mia Xu Chen, Isaac Caswell, Xavier Garcia

Joint Unsupervised and Supervised Training for Multilingual ASR

Nov 15, 2021
Junwen Bai, Bo Li, Yu Zhang, Ankur Bapna, Nikhil Siddhartha, Khe Chai Sim, Tara N. Sainath

SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training

Oct 20, 2021
Ankur Bapna, Yu-an Chung, Nan Wu, Anmol Gulati, Ye Jia, Jonathan H. Clark, Melvin Johnson, Jason Riesa, Alexis Conneau, Yu Zhang

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference

Sep 24, 2021
Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat