
Kelvin Guu

PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions

May 24, 2023
Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu

Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs

Mar 14, 2023
Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, Tolga Bolukbasi

Meta-Learning Fast Weight Language Models

Dec 05, 2022
Kevin Clark, Kelvin Guu, Ming-Wei Chang, Panupong Pasupat, Geoffrey Hinton, Mohammad Norouzi

Attributed Text Generation via Post-hoc Research and Revision

Oct 17, 2022
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, Kelvin Guu

Promptagator: Few-shot Dense Retrieval From 8 Examples

Sep 23, 2022
Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

Dialog Inpainting: Turning Documents into Dialogs

May 31, 2022
Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Zhao, Aida Amini, Qazi Mamunur Rashid, Mike Green, Kelvin Guu

Tracing Knowledge in Language Models Back to the Training Data

May 24, 2022
Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu

Controllable Semantic Parsing via Retrieval Augmentation

Oct 16, 2021
Panupong Pasupat, Yuan Zhang, Kelvin Guu

Finetuned Language Models Are Zero-Shot Learners

Sep 03, 2021
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le
