Kevin Duh

Very Deep Transformers for Neural Machine Translation

Aug 18, 2020
Xiaodong Liu, Kevin Duh, Liyuan Liu, Jianfeng Gao

Modeling Document Interactions for Learning to Rank with Regularized Self-Attention

May 08, 2020
Shuo Sun, Kevin Duh

ESPnet-ST: All-in-One Speech Translation Toolkit

Apr 21, 2020
Hirofumi Inaguma, Shun Kiyono, Kevin Duh, Shigeki Karita, Nelson Enrique Yalta Soplin, Tomoki Hayashi, Shinji Watanabe

When Does Unsupervised Machine Translation Work?

Apr 14, 2020
Kelly Marchisio, Kevin Duh, Philipp Koehn

Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation

Mar 05, 2020
Mitchell A. Gordon, Kevin Duh

Machine Translation System Selection from Bandit Feedback

Feb 22, 2020
Jason Naradowsky, Xuan Zhang, Kevin Duh

Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning

Feb 19, 2020
Mitchell A. Gordon, Kevin Duh, Nicholas Andrews

Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation

Dec 06, 2019
Mitchell A. Gordon, Kevin Duh

Multilingual End-to-End Speech Translation

Oct 31, 2019
Hirofumi Inaguma, Kevin Duh, Tatsuya Kawahara, Shinji Watanabe
