
Chengqi Lyu

InternBootcamp Technical Report: Boosting LLM Reasoning with Verifiable Task Scaling (Aug 12, 2025)

The Imitation Game: Turing Machine Imitator is Length Generalizable Reasoner (Jul 17, 2025)

Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs (Mar 04, 2025)

Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning (Feb 10, 2025)

Training Language Models to Critique With Multi-agent Feedback (Oct 20, 2024)

ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models (Jul 05, 2024)

ANAH: Analytical Annotation of Hallucinations in Large Language Models (May 30, 2024)

AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data (May 29, 2024)

Fake Alignment: Are LLMs Really Aligned Well? (Nov 14, 2023)

MultiModal-GPT: A Vision and Language Model for Dialogue with Humans (May 09, 2023)