Baolin Peng

Collaborative decoding of critical tokens for boosting factuality of large language models

Feb 28, 2024
Lifeng Jin, Baolin Peng, Linfeng Song, Haitao Mi, Ye Tian, Dong Yu

Fine-Grained Self-Endorsement Improves Factuality and Reasoning

Feb 23, 2024
Ante Wang, Linfeng Song, Baolin Peng, Ye Tian, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu

Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation

Feb 14, 2024
Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng

Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations

Nov 07, 2023
Sihao Chen, Hongming Zhang, Tong Chen, Ben Zhou, Wenhao Yu, Dian Yu, Baolin Peng, Hongwei Wang, Dan Roth, Dong Yu

Teaching Language Models to Self-Improve through Interactive Demonstrations

Oct 20, 2023
Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, Zhou Yu

The Trickle-down Impact of Reward (In-)consistency on RLHF

Sep 28, 2023
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu

Stabilizing RLHF through Advantage Model and Selective Rehearsal

Sep 18, 2023
Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu

Do you really follow me? Adversarial Instructions for Evaluating the Robustness of Large Language Models

Aug 17, 2023
Zekun Li, Baolin Peng, Pengcheng He, Xifeng Yan

Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models

May 24, 2023
Miaoran Li, Baolin Peng, Zhu Zhang

SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting

May 15, 2023
Xiaoying Zhang, Baolin Peng, Kun Li, Jingyan Zhou, Helen Meng
