Qizhi Chen

NL2Repo-Bench: Towards Long-Horizon Repository Generation Evaluation of Coding Agents

Dec 14, 2025

Openpi Comet: Competition Solution For 2025 BEHAVIOR Challenge

Dec 12, 2025

Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation

Dec 11, 2025

NeuroPath: Neurobiology-Inspired Path Tracking and Reflection for Semantically Coherent Retrieval

Nov 18, 2025

F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions

Sep 09, 2025

EmbodiedOneVision: Interleaved Vision-Text-Action Pretraining for General Robot Control

Aug 28, 2025

GraphCogent: Overcoming LLMs' Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding

Aug 17, 2025

Hume: Introducing System-2 Thinking in Visual-Language-Action Model

May 29, 2025

AerialVG: A Challenging Benchmark for Aerial Visual Grounding by Exploring Positional Relations

Apr 10, 2025

Think Small, Act Big: Primitive Prompt Learning for Lifelong Robot Manipulation

Apr 01, 2025