
Mikel Artetxe

State-of-the-art generalisation research in NLP: a taxonomy and review

Oct 10, 2022

Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models

Jun 08, 2022

Principled Paraphrase Generation with Parallel Corpora

May 24, 2022

PoeLM: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation

May 24, 2022

On the Role of Bidirectionality in Language Model Pre-Training

May 24, 2022

Multilingual Machine Translation with Hyper-Adapters

May 22, 2022

Lifting the Curse of Multilinguality by Pre-training Modular Transformers

May 12, 2022

OPT: Open Pre-trained Transformer Language Models

May 05, 2022

Efficient Language Modeling with Sparse all-MLP

Mar 16, 2022

Does Corpus Quality Really Matter for Low-Resource Languages?

Mar 15, 2022