Yongchao Chen

Learning Primitive Embodied World Models: Towards Scalable Robotic Learning

Aug 28, 2025

Agentic Robot: A Brain-Inspired Framework for Vision-Language-Action Models in Embodied Agents

May 29, 2025

R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning

May 27, 2025

Collision- and Reachability-Aware Multi-Robot Control with Grounded LLM Planners

May 26, 2025

Code-as-Symbolic-Planner: Foundation Model-Based Robot Planning via Symbolic Code Generation

Mar 03, 2025

CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance

Feb 04, 2025

Steering Large Language Models between Code Execution and Textual Reasoning

Oct 04, 2024

CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents

Jul 01, 2024

Large Language Models Can Plan Your Travels Rigorously with Formal Verification Tools

Apr 18, 2024

PRompt Optimization in Multi-Step Tasks (PROMST): Integrating Human Feedback and Preference Alignment

Feb 13, 2024