Minh-Thang Luong

MTet: Multi-domain Translation for English and Vietnamese

Oct 19, 2022
Chinh Ngo, Trieu H. Trinh, Long Phan, Hieu Tran, Tai Dang, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong

Combined Scaling for Zero-shot Transfer Learning

Nov 19, 2021
Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingxing Tan, Quoc V. Le

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference

Sep 24, 2021
Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

Sep 13, 2021
Tu Vu, Minh-Thang Luong, Quoc V. Le, Grady Simon, Mohit Iyyer

Pre-Training Transformers as Energy-Based Cloze Models

Dec 15, 2020
Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning

Towards Domain-Agnostic Contrastive Learning

Nov 09, 2020
Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Mar 23, 2020
Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning

Towards a Human-like Open-Domain Chatbot

Feb 27, 2020
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le
