Daniel Khashabi

Dated Data: Tracing Knowledge Cutoffs in Large Language Models

Mar 19, 2024
Jeffrey Cheng, Marc Marone, Orion Weller, Dawn Lawrie, Daniel Khashabi, Benjamin Van Durme

Tur[k]ingBench: A Challenge Benchmark for Web Agents

Mar 18, 2024
Kevin Xu, Yeganeh Kordi, Kate Sanders, Yizhong Wang, Adam Byerly, Jack Zhang, Benjamin Van Durme, Daniel Khashabi

RORA: Robust Free-Text Rationale Evaluation

Mar 01, 2024
Zhengping Jiang, Yining Lu, Hanjie Chen, Daniel Khashabi, Benjamin Van Durme, Anqi Liu

AnaloBench: Benchmarking the Identification of Abstract and Long-context Analogies

Feb 19, 2024
Xiao Ye, Andrew Wang, Jacob Choi, Yining Lu, Shreya Sharma, Lingfeng Shen, Vijay Tiyyala, Nicholas Andrews, Daniel Khashabi

k-SemStamp: A Clustering-Based Semantic Watermark for Detection of Machine-Generated Text

Feb 17, 2024
Abe Bohan Hou, Jingyu Zhang, Yichen Wang, Daniel Khashabi, Tianxing He

The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts

Jan 23, 2024
Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, Daniel Khashabi

Do pretrained Transformers Really Learn In-context by Gradient Descent?

Oct 12, 2023
Lingfeng Shen, Aayush Mishra, Daniel Khashabi

SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation

Oct 06, 2023
Abe Bohan Hou, Jingyu Zhang, Tianxing He, Yichen Wang, Yung-Sung Chuang, Hongwei Wang, Lingfeng Shen, Benjamin Van Durme, Daniel Khashabi, Yulia Tsvetkov

Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models

Oct 02, 2023
Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton Murray
