Machel Reid

Large Language Models are Zero-Shot Reasoners

May 24, 2022

A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation

May 04, 2022

Can Wikipedia Help Offline Reinforcement Learning?

Jan 28, 2022

AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages

Sep 10, 2021

PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining

Aug 04, 2021

LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer

May 18, 2021

Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers

Jan 01, 2021

VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word Representations for Improved Definition Modeling

Oct 07, 2020

Variational Inference for Learning Representations of Natural Language Edits

May 18, 2020

Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

Mar 11, 2020