Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models

Jun 08, 2022
Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, Ves Stoyanov

* The code is available at https://github.com/facebookresearch/ELECTRA-Fewshot-Learning

Principled Paraphrase Generation with Parallel Corpora

May 24, 2022
Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, Eneko Agirre

* ACL 2022

PoeLM: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation

May 24, 2022
Aitor Ormazabal, Mikel Artetxe, Manex Agirrezabal, Aitor Soroa, Eneko Agirre

On the Role of Bidirectionality in Language Model Pre-Training

May 24, 2022
Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Ves Stoyanov

Multilingual Machine Translation with Hyper-Adapters

May 22, 2022
Christos Baziotis, Mikel Artetxe, James Cross, Shruti Bhosale

Lifting the Curse of Multilinguality by Pre-training Modular Transformers

May 12, 2022
Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe

* NAACL 2022

OPT: Open Pre-trained Transformer Language Models

May 05, 2022
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer

Efficient Language Modeling with Sparse all-MLP

Mar 16, 2022
Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li

Does Corpus Quality Really Matter for Low-Resource Languages?

Mar 15, 2022
Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa