Yongchao Chen

Agentic Robot: A Brain-Inspired Framework for Vision-Language-Action Models in Embodied Agents

May 29, 2025

R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning

May 27, 2025

Collision- and Reachability-Aware Multi-Robot Control with Grounded LLM Planners

May 26, 2025

Code-as-Symbolic-Planner: Foundation Model-Based Robot Planning via Symbolic Code Generation

Mar 03, 2025

CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance

Feb 04, 2025

Steering Large Language Models between Code Execution and Textual Reasoning

Oct 04, 2024

CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents

Jul 01, 2024

Large Language Models Can Plan Your Travels Rigorously with Formal Verification Tools

Apr 18, 2024

PRompt Optimization in Multi-Step Tasks (PROMST): Integrating Human Feedback and Preference Alignment

Feb 13, 2024

Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint

Nov 17, 2023