Mikel Artetxe

Lifting the Curse of Multilinguality by Pre-training Modular Transformers

May 12, 2022
Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe

OPT: Open Pre-trained Transformer Language Models

May 05, 2022
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer

Efficient Language Modeling with Sparse all-MLP

Mar 16, 2022
Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li

Does Corpus Quality Really Matter for Low-Resource Languages?

Mar 15, 2022
Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

Feb 25, 2022
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer

Efficient Large Scale Language Modeling with Mixtures of Experts

Dec 20, 2021
Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, Ves Stoyanov

Few-shot Learning with Multilingual Language Models

Dec 20, 2021
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li

PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining

Aug 04, 2021
Machel Reid, Mikel Artetxe

Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining

May 21, 2021
Ivana Kvapilíková, Mikel Artetxe, Gorka Labaka, Eneko Agirre, Ondřej Bojar
