Lifeng Jin

Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing

Apr 18, 2024
Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu

Entropy Guided Extrapolative Decoding to Improve Factuality in Large Language Models

Apr 14, 2024
Souvik Das, Lifeng Jin, Linfeng Song, Haitao Mi, Baolin Peng, Dong Yu

Self-Consistency Boosts Calibration for Math Reasoning

Mar 14, 2024
Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu

A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation

Mar 06, 2024
Xiangci Li, Linfeng Song, Lifeng Jin, Haitao Mi, Jessica Ouyang, Dong Yu

Collaborative decoding of critical tokens for boosting factuality of large language models

Feb 28, 2024
Lifeng Jin, Baolin Peng, Linfeng Song, Haitao Mi, Ye Tian, Dong Yu

Fine-Grained Self-Endorsement Improves Factuality and Reasoning

Feb 23, 2024
Ante Wang, Linfeng Song, Baolin Peng, Ye Tian, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu

Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation

Feb 14, 2024
Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng

Inconsistent dialogue responses and how to recover from them

Jan 18, 2024
Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi, Dong Yu

TencentLLMEval: A Hierarchical Evaluation of Real-World Capabilities for Human-Aligned LLMs

Nov 09, 2023
Shuyi Xie, Wenlin Yao, Yong Dai, Shaobo Wang, Donlin Zhou, Lifeng Jin, Xinhua Feng, Pengzhi Wei, Yujie Lin, Zhichao Hu, Dong Yu, Zhengyou Zhang, Jing Nie, Yuhong Liu

The Trickle-down Impact of Reward (In-)consistency on RLHF

Sep 28, 2023
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu
