
Yingqian Min

ICPC-Eval: Probing the Frontiers of LLM Reasoning with Competitive Programming Contests

Jun 05, 2025

Towards Effective Code-Integrated Reasoning

May 30, 2025

R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning

May 22, 2025

Challenging the Boundaries of Reasoning: An Olympiad-Level Math Benchmark for Large Language Models

Mar 27, 2025

Unlocking General Long Chain-of-Thought Reasoning Capabilities of Large Language Models via Representation Engineering

Mar 14, 2025

R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning

Mar 07, 2025

An Empirical Study on Eliciting and Improving R1-like Reasoning Models

Mar 06, 2025

Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems

Dec 12, 2024

Technical Report: Enhancing LLM Reasoning with Reward-guided Tree Search

Nov 18, 2024

Towards Effective and Efficient Continual Pre-training of Large Language Models

Jul 26, 2024