Graham Neubig

A Multi-dimensional Evaluation of Tokenizer-free Multilingual Pretrained Models

Oct 13, 2022
Jimin Sun, Patrick Fernandes, Xinyi Wang, Graham Neubig

CTC Alignments Improve Autoregressive Translation

Oct 11, 2022
Brian Yan, Siddharth Dalmia, Yosuke Higuchi, Graham Neubig, Florian Metze, Alan W Black, Shinji Watanabe

Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering

Oct 09, 2022
Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig

Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models

Oct 07, 2022
Emmy Liu, Graham Neubig

Mega: Moving Average Equipped Gated Attention

Sep 26, 2022
Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer

KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models

Aug 23, 2022
Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, Pengfei Liu, Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig

DocCoder: Generating Code by Retrieving and Reading Docs

Jul 13, 2022
Shuyan Zhou, Uri Alon, Frank F. Xu, Zhengbao Jiang, Graham Neubig

OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering

Jul 08, 2022
Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, Weizhu Chen

Building African Voices

Jul 01, 2022
Perez Ogayo, Graham Neubig, Alan W Black

Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning

Jun 10, 2022
Aditi Chaudhary, Arun Sampath, Ashwin Sheshadri, Antonios Anastasopoulos, Graham Neubig
