
Kevin Duh

Leveraging End-to-End ASR for Endangered Language Documentation: An Empirical Study on Yoloxóchitl Mixtec

Feb 26, 2021

Orthros: Non-autoregressive End-to-end Speech Translation with Dual-decoder

Nov 06, 2020

Very Deep Transformers for Neural Machine Translation

Aug 18, 2020

Modeling Document Interactions for Learning to Rank with Regularized Self-Attention

May 08, 2020

ESPnet-ST: All-in-One Speech Translation Toolkit

Apr 21, 2020

When Does Unsupervised Machine Translation Work?

Apr 14, 2020

Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation

Mar 05, 2020

Machine Translation System Selection from Bandit Feedback

Feb 22, 2020

Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning

Feb 19, 2020

Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation

Dec 06, 2019