Yinhan Liu

Hierarchical Learning for Generation with Long Source Sequences

Apr 15, 2021
Tobias Rohde, Xiaoxia Wu, Yinhan Liu

Recipes for building an open-domain chatbot

Apr 30, 2020
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston

Multilingual Denoising Pre-training for Neural Machine Translation

Jan 23, 2020
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

Oct 29, 2019
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer

SpanBERT: Improving Pre-training by Representing and Predicting Spans

Jul 31, 2019
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Jul 26, 2019
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov

Constant-Time Machine Translation with Conditional Masked Language Models

Apr 19, 2019
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer

Cloze-driven Pretraining of Self-attention Networks

Mar 19, 2019
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli
