Machel Reid

A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation

May 04, 2022
David Ifeoluwa Adelani, Jesujoba Oluwadara Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Chinenye Emezue, Colin Leong, Michael Beukman, Shamsuddeen Hassan Muhammad, Guyo Dub Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ayoade Ajibade, Tunde Oluwaseyi Ajayi, Yvonne Wambui Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Koffi Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, Sam Manthalu

Can Wikipedia Help Offline Reinforcement Learning?

Jan 28, 2022
Machel Reid, Yutaro Yamada, Shixiang Shane Gu

AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages

Sep 10, 2021
Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo

PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining

Aug 04, 2021
Machel Reid, Mikel Artetxe

LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer

May 18, 2021
Machel Reid, Victor Zhong

Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers

Jan 01, 2021
Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word Representations for Improved Definition Modeling

Oct 07, 2020
Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

Variational Inference for Learning Representations of Natural Language Edits

May 18, 2020
Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo

Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

Mar 11, 2020
Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
