Kyunghyun Cho

Consistency of a Recurrent Language Model With Respect to Incomplete Decoding

Feb 06, 2020
Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, Kyunghyun Cho

Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

Nov 10, 2019
Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, Jason Weston

Multi-Stage Document Ranking with BERT

Oct 31, 2019
Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, Jimmy Lin

Mix-review: Alleviate Forgetting in the Pretrain-Finetune Framework for Neural Language Generation Models

Oct 29, 2019
Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng

Capacity, Bandwidth, and Compositionality in Emergent Language Learning

Oct 24, 2019
Cinjon Resnick, Abhinav Gupta, Jakob Foerster, Andrew M. Dai, Kyunghyun Cho

Generalized Inner Loop Meta-Learning

Oct 07, 2019
Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models

Sep 25, 2019
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
