Weijia Shi

Instruction-tuned Language Models are Better Knowledge Learners

Feb 20, 2024
Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Victoria Lin, Wen-tau Yih, Srinivasan Iyer


Do Membership Inference Attacks Work on Large Language Models?

Feb 12, 2024
Michael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer, Yulia Tsvetkov, Yejin Choi, David Evans, Hannaneh Hajishirzi


Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

Feb 01, 2024
Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, Yulia Tsvetkov


Detecting Pretraining Data from Large Language Models

Nov 03, 2023
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer


In-Context Pretraining: Language Modeling Beyond Document Boundaries

Oct 20, 2023
Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis


Lemur: Harmonizing Natural Language and Code for Language Agents

Oct 10, 2023
Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu


RA-DIT: Retrieval-Augmented Dual Instruction Tuning

Oct 08, 2023
Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih


RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation

Oct 06, 2023
Fangyuan Xu, Weijia Shi, Eunsol Choi
