Dani Yogatama

A Contrastive Framework for Neural Text Generation

Feb 13, 2022
Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, Nigel Collier

Relational Memory Augmented Language Models

Jan 24, 2022
Qi Liu, Dani Yogatama, Phil Blunsom

Balancing Average and Worst-case Accuracy in Multitask Learning

Oct 12, 2021
Paul Michel, Sebastian Ruder, Dani Yogatama

ABC: Attention with Bounded-memory Control

Oct 06, 2021
Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers

Sep 22, 2021
Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler

End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering

Jun 09, 2021
Devendra Singh Sachan, Siva Reddy, William Hamilton, Chris Dyer, Dani Yogatama

Finetuning Pretrained Transformers into RNNs

Mar 24, 2021
Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith

Random Feature Attention

Mar 19, 2021
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong

Adaptive Semiparametric Language Models

Feb 04, 2021
Dani Yogatama, Cyprien de Masson d'Autume, Lingpeng Kong
