Ves Stoyanov

Efficient Language Modeling with Sparse all-MLP
Mar 14, 2022
Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li

Efficient Large Scale Language Modeling with Mixtures of Experts
Dec 20, 2021
Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, Ves Stoyanov

Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
Nov 12, 2020
Beliz Gunel, Jingfei Du, Alexis Conneau, Ves Stoyanov

Self-training Improves Pre-training for Natural Language Understanding
Oct 05, 2020
Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Ves Stoyanov, Alexis Conneau

Conversational Semantic Parsing
Sep 28, 2020
Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta

Preserving Integrity in Online Social Networks
Sep 25, 2020
Alon Halevy, Cristian Canton Ferrer, Hao Ma, Umut Ozertem, Patrick Pantel, Marzieh Saeidi, Fabrizio Silvestri, Ves Stoyanov

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Oct 29, 2019
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer
