
Learning to Model Editing Processes



Machel Reid, Graham Neubig



Large Language Models are Zero-Shot Reasoners



Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa



A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation



David Ifeoluwa Adelani, Jesujoba Oluwadara Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Chinenye Emezue, Colin Leong, Michael Beukman, Shamsuddeen Hassan Muhammad, Guyo Dub Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ayoade Ajibade, Tunde Oluwaseyi Ajayi, Yvonne Wambui Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Koffi Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, Sam Manthalu

* Accepted to NAACL 2022


Can Wikipedia Help Offline Reinforcement Learning?



Machel Reid, Yutaro Yamada, Shixiang Shane Gu



AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages



Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo

* EMNLP 2021 


PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining



Machel Reid, Mikel Artetxe

* Preprint 


LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer



Machel Reid, Victor Zhong

* ACL-IJCNLP 2021 (Findings) 


Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers



Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

* Work in progress 


VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word Representations for Improved Definition Modeling



Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

* EMNLP 2020, 10 Pages 


Variational Inference for Learning Representations of Natural Language Edits



Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo

* 5th Workshop on Representation Learning for NLP (RepL4NLP-2020) 

