Graham Neubig

When Does Translation Require Context? A Data-driven, Multilingual Exploration
Sep 15, 2021 · Kayo Yin, Patrick Fernandes, André F. T. Martins, Graham Neubig

Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative
Sep 15, 2021 · Lucio M. Dery, Paul Michel, Ameet Talwalkar, Graham Neubig

When is Wall a Pared and when a Muro? -- Extracting Rules Governing Lexical Selection
Sep 13, 2021 · Aditi Chaudhary, Kayo Yin, Antonios Anastasopoulos, Graham Neubig

Efficient Test Time Adapter Ensembling for Low-resource Language Varieties
Sep 10, 2021 · Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, Graham Neubig

AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages
Sep 10, 2021 · Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo

Efficient Nearest Neighbor Language Models
Sep 09, 2021 · Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick

Distributionally Robust Multilingual Machine Translation
Sep 09, 2021 · Chunting Zhou, Daniel Levy, Xian Li, Marjan Ghazvininejad, Graham Neubig

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Jul 28, 2021 · Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig

Few-shot Language Coordination by Modeling Theory of Mind
Jul 12, 2021 · Hao Zhu, Graham Neubig, Yonatan Bisk

BARTScore: Evaluating Generated Text as Text Generation
Jun 22, 2021 · Weizhe Yuan, Graham Neubig, Pengfei Liu