Zhen Tan

Metacognitive Self-Correction for Multi-Agent System via Prototype-Guided Next-Execution Reconstruction

Oct 16, 2025

Multi-Agent Debate for LLM Judges with Adaptive Stability Detection

Oct 14, 2025

Learning from Diverse Reasoning Paths with Routing and Collaboration

Aug 23, 2025

Are Today's LLMs Ready to Explain Well-Being Concepts?

Aug 06, 2025

Transferring Expert Cognitive Models to Social Robots via Agentic Concept Bottleneck Models

Aug 06, 2025

Model Editing as a Double-Edged Sword: Steering Agent Ethical Behavior Toward Beneficence or Harm

Jun 25, 2025

EQA-RM: A Generative Embodied Reward Model with Test-time Scaling

Jun 12, 2025

IndustryEQA: Pushing the Frontiers of Embodied Question Answering in Industrial Scenarios

May 27, 2025

DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation

May 26, 2025

The Quest for Efficient Reasoning: A Data-Centric Benchmark to CoT Distillation

May 24, 2025