
Junwei Liang

From Watch to Imagine: Steering Long-horizon Manipulation via Human Demonstration and Future Envisionment

Sep 26, 2025

End-to-End Humanoid Robot Safe and Comfortable Locomotion Policy

Aug 11, 2025

Stairway to Success: Zero-Shot Floor-Aware Object-Goal Navigation via LLM-Driven Coarse-to-Fine Exploration

May 29, 2025

Zero-Shot 3D Visual Grounding from Vision-Language Models

May 28, 2025

Omni-Perception: Omnidirectional Collision Avoidance for Legged Locomotion in Dynamic Environments

May 25, 2025

SD-OVON: A Semantics-aware Dataset and Benchmark Generation Pipeline for Open-Vocabulary Object Navigation in Dynamic Scenes

May 24, 2025

Exploring the Limits of Vision-Language-Action Manipulations in Cross-task Generalization

May 21, 2025

GLOVER++: Unleashing the Potential of Affordance Learning from Human Behaviors for Robotic Manipulation

May 17, 2025

GaussianProperty: Integrating Physical Properties to 3D Gaussians with LMMs

Dec 15, 2024

SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding

Dec 05, 2024