
Haichao Liu

The Hong Kong University of Science and Technology

RoCo Challenge at AAAI 2026: Benchmarking Robotic Collaborative Manipulation for Assembly Towards Industrial Automation

Mar 16, 2026

MoE-ACT: Scaling Multi-Task Bimanual Manipulation with Sparse Language-Conditioned Mixture-of-Experts Transformers

Mar 16, 2026

SpecFuse: A Spectral-Temporal Fusion Predictive Control Framework for UAV Landing on Oscillating Marine Platforms

Feb 17, 2026

UniManip: General-Purpose Zero-Shot Robotic Manipulation with Agentic Operational Graph

Feb 13, 2026

An Intention-driven Lane Change Framework Considering Heterogeneous Dynamic Cooperation in Mixed-traffic Environment

Sep 26, 2025

RoboDexVLM: Visual Language Model-Enabled Task Planning and Motion Control for Dexterous Robot Manipulation

Mar 03, 2025

VLM-E2E: Enhancing End-to-End Autonomous Driving with Multimodal Driver Attention Fusion

Feb 25, 2025

CoDriveVLM: VLM-Enhanced Urban Cooperative Dispatching and Motion Planning for Future Autonomous Mobility on Demand Systems

Jan 10, 2025

UDMC: Unified Decision-Making and Control Framework for Urban Autonomous Driving with Motion Prediction of Traffic Participants

Jan 05, 2025

CALMM-Drive: Confidence-Aware Autonomous Driving with Large Multimodal Model

Dec 05, 2024