Daniel Khashabi

Do pretrained Transformers Really Learn In-context by Gradient Descent?

Oct 12, 2023
Lingfeng Shen, Aayush Mishra, Daniel Khashabi


SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation

Oct 06, 2023
Abe Bohan Hou, Jingyu Zhang, Tianxing He, Yichen Wang, Yung-Sung Chuang, Hongwei Wang, Lingfeng Shen, Benjamin Van Durme, Daniel Khashabi, Yulia Tsvetkov


Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models

Oct 02, 2023
Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton Murray


The Trickle-down Impact of Reward (In-)consistency on RLHF

Sep 28, 2023
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu


GEAR: Augmenting Language Models with Generalizable and Efficient Tool Resolution

Jul 17, 2023
Yining Lu, Haoping Yu, Daniel Khashabi


"According to ..." Prompting Language Models Improves Quoting from Pre-Training Data

May 22, 2023
Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme


Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency

May 18, 2023
Lingfeng Shen, Weiting Tan, Boyuan Zheng, Daniel Khashabi


Self-Instruct: Aligning Language Models with Self-Generated Instructions

Dec 20, 2022
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi


When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories

Dec 20, 2022
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, Daniel Khashabi


Generating Sequences by Learning to Self-Correct

Oct 31, 2022
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi
