Junwei Liang

NavThinker: Action-Conditioned World Models for Coupled Prediction and Planning in Social Navigation

Mar 16, 2026

FLUX: Accelerating Cross-Embodiment Generative Navigation Policies via Rectified Flow and Static-to-Dynamic Learning

Mar 13, 2026

DiT4DiT: Jointly Modeling Video Dynamics and Actions for Generalizable Robot Control

Mar 11, 2026

MeshMimic: Geometry-Aware Humanoid Motion Learning through 3D Scene Reconstruction

Feb 17, 2026

The RoboSense Challenge: Sense Anything, Navigate Anywhere, Adapt Across Platforms

Jan 08, 2026

Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future

Dec 18, 2025

From Watch to Imagine: Steering Long-horizon Manipulation via Human Demonstration and Future Envisionment

Sep 26, 2025

End-to-End Humanoid Robot Safe and Comfortable Locomotion Policy

Aug 11, 2025

Stairway to Success: Zero-Shot Floor-Aware Object-Goal Navigation via LLM-Driven Coarse-to-Fine Exploration

May 29, 2025

Zero-Shot 3D Visual Grounding from Vision-Language Models

May 28, 2025