Wen-tau Yih

Instruction-tuned Language Models are Better Knowledge Learners

Feb 20, 2024
Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Victoria Lin, Wen-tau Yih, Srinivasan Iyer

Expand, Rerank, and Retrieve: Query Reranking for Open-Domain Question Answering

May 26, 2023
Yung-Sung Chuang, Wei Fang, Shang-Wen Li, Wen-tau Yih, James Glass

FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation

May 23, 2023
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi

Efficient Open Domain Multi-Hop Question Answering with Few-Shot Data Synthesis

May 23, 2023
Mingda Chen, Xilun Chen, Wen-tau Yih

Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing

May 14, 2023
Hao Yan, Saurabh Srivastava, Yintao Tai, Sida I. Wang, Wen-tau Yih, Ziyu Yao

Large Language Model Programs

May 09, 2023
Imanol Schlag, Sainbayar Sukhbaatar, Asli Celikyilmaz, Wen-tau Yih, Jason Weston, Jürgen Schmidhuber, Xian Li

VideoOFA: Two-Stage Pre-Training for Video-to-Text Generation

May 04, 2023
Xilun Chen, Lili Yu, Wenhan Xiong, Barlas Oğuz, Yashar Mehdad, Wen-tau Yih

LEVER: Learning to Verify Language-to-Code Generation with Execution

Feb 16, 2023
Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I. Wang, Xi Victoria Lin

How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval

Feb 15, 2023
Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, Xilun Chen

REPLUG: Retrieval-Augmented Black-Box Language Models

Feb 01, 2023
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
