Zehui Chen

VimRAG: Navigating Massive Visual Context in Retrieval-Augmented Generation via Multimodal Memory Graph

Feb 13, 2026

ADORA: Training Reasoning Models with Dynamic Advantage Estimation on Reinforcement Learning

Feb 10, 2026

Internalizing Meta-Experience into Memory for Guided Reinforcement Learning in Large Language Models

Feb 10, 2026

Vision-DeepResearch Benchmark: Rethinking Visual and Textual Search for Multimodal Large Language Models

Feb 02, 2026

Vision-DeepResearch: Incentivizing DeepResearch Capability in Multimodal Large Language Models

Jan 29, 2026

UniCorn: Towards Self-Improving Unified Multimodal Models through Self-Generated Supervision

Jan 08, 2026

AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning

Sep 10, 2025

VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforcement Learning

May 28, 2025

VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning

Apr 10, 2025

ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents

Feb 25, 2025