Michael Auli

The Source-Target Domain Mismatch Problem in Machine Translation

Sep 28, 2019
Jiajun Shen, Peng-Jen Chen, Matt Le, Junxian He, Jiatao Gu, Myle Ott, Michael Auli, Marc'Aurelio Ranzato

Simple and Effective Noisy Channel Modeling for Neural Machine Translation

Aug 15, 2019
Kyra Yee, Nathan Ng, Yann N. Dauphin, Michael Auli

On The Evaluation of Machine Translation Systems Trained With Back-Translation

Aug 14, 2019
Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, Michael Auli

ELI5: Long Form Question Answering

Jul 22, 2019
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli

Facebook FAIR's WMT19 News Translation Task Submission

Jul 15, 2019
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov

GLOSS: Generative Latent Optimization of Sentence Representations

Jul 15, 2019
Sidak Pal Singh, Angela Fan, Michael Auli

wav2vec: Unsupervised Pre-training for Speech Recognition

May 24, 2019
Steffen Schneider, Alexei Baevski, Ronan Collobert, Michael Auli

fairseq: A Fast, Extensible Toolkit for Sequence Modeling

Apr 01, 2019
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli

Pre-trained Language Model Representations for Language Generation

Apr 01, 2019
Sergey Edunov, Alexei Baevski, Michael Auli
