
Lingfeng Shen


AnaloBench: Benchmarking the Identification of Abstract and Long-context Analogies

Feb 19, 2024
Xiao Ye, Andrew Wang, Jacob Choi, Yining Lu, Shreya Sharma, Lingfeng Shen, Vijay Tiyyala, Nicholas Andrews, Daniel Khashabi


Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation

Feb 02, 2024
Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim


The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts

Jan 23, 2024
Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, Daniel Khashabi


Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles

Nov 04, 2023
Weiting Tan, Haoran Xu, Lingfeng Shen, Shuyue Stella Li, Kenton Murray, Philipp Koehn, Benjamin Van Durme, Yunmo Chen


Do pretrained Transformers Really Learn In-context by Gradient Descent?

Oct 12, 2023
Lingfeng Shen, Aayush Mishra, Daniel Khashabi


SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation

Oct 06, 2023
Abe Bohan Hou, Jingyu Zhang, Tianxing He, Yichen Wang, Yung-Sung Chuang, Hongwei Wang, Lingfeng Shen, Benjamin Van Durme, Daniel Khashabi, Yulia Tsvetkov


The Trickle-down Impact of Reward (In-)consistency on RLHF

Sep 28, 2023
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu


Sen2Pro: A Probabilistic Perspective to Sentence Embedding from Pre-trained Language Model

Jun 04, 2023
Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi


Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency

May 18, 2023
Lingfeng Shen, Weiting Tan, Boyuan Zheng, Daniel Khashabi
