Xi Victoria Lin

In-Context Pretraining: Language Modeling Beyond Document Boundaries

Oct 20, 2023
Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis

RA-DIT: Retrieval-Augmented Dual Instruction Tuning

Oct 08, 2023
Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih

Reimagining Retrieval Augmented Language Models for Answering Queries

Jun 01, 2023
Wang-Chiew Tan, Yuliang Li, Pedro Rodriguez, Richard James, Xi Victoria Lin, Alon Halevy, Scott Yih

Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model

May 23, 2023
Leo Z. Liu, Tim Dettmers, Xi Victoria Lin, Veselin Stoyanov, Xian Li

LEVER: Learning to Verify Language-to-Code Generation with Execution

Feb 16, 2023
Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I. Wang, Xi Victoria Lin

OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

Dec 28, 2022
Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov

Training Trajectories of Language Models Across Scales

Dec 19, 2022
Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, Ves Stoyanov

FOLIO: Natural Language Reasoning with First-Order Logic

Sep 02, 2022
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq Joty, Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong, Dragomir Radev

Lifting the Curse of Multilinguality by Pre-training Modular Transformers

May 12, 2022
Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe

OPT: Open Pre-trained Transformer Language Models

May 05, 2022
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer
