Yifei Ming

LiveResearchBench: A Live Benchmark for User-Centric Deep Research in the Wild

Oct 16, 2025

Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning

Jun 05, 2025

MAS-ZERO: Designing Multi-Agent Systems with Zero Supervision

May 26, 2025

Meta-Design Matters: A Self-Design Multi-Agent System

May 21, 2025

A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems

Apr 12, 2025

Adaptation of Large Language Models

Apr 04, 2025

Does Context Matter? ContextualJudgeBench for Evaluating LLM-based Judges in Contextual Settings

Mar 19, 2025

Demystifying Domain-adaptive Post-training for Financial LLMs

Jan 09, 2025

Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction

Sep 25, 2024

SFR-RAG: Towards Contextually Faithful LLMs

Sep 16, 2024