
Scott Yih

In-Context Pretraining: Language Modeling Beyond Document Boundaries

Oct 20, 2023
Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis

RA-DIT: Retrieval-Augmented Dual Instruction Tuning

Oct 08, 2023
Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih

Reimagining Retrieval Augmented Language Models for Answering Queries

Jun 01, 2023
Wang-Chiew Tan, Yuliang Li, Pedro Rodriguez, Richard James, Xi Victoria Lin, Alon Halevy, Scott Yih

BiT: Robustly Binarized Multi-distilled Transformer

May 25, 2022
Zechun Liu, Barlas Oguz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad

Unified Open-Domain Question Answering with Structured and Unstructured Knowledge

Dec 29, 2020
Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Scott Yih
