Xianpeng Lang

Efficient Token Pruning for LLaDA-V

Jan 28, 2026

PlannerRFT: Reinforcing Diffusion Planners through Closed-Loop and Sample-Efficient Fine-Tuning

Jan 19, 2026

SGDrive: Scene-to-Goal Hierarchical World Cognition for Autonomous Driving

Jan 12, 2026

DriveLiDAR4D: Sequential and Controllable LiDAR Scene Generation for Autonomous Driving

Nov 17, 2025

The Better You Learn, The Smarter You Prune: Towards Efficient Vision-language-action Models via Differentiable Token Pruning

Sep 16, 2025

DriveAgent-R1: Advancing VLM-based Autonomous Driving with Hybrid Thinking and Active Perception

Jul 28, 2025

DriveAction: A Benchmark for Exploring Human-like Driving Decisions in VLA Models

Jun 06, 2025

TokenFLEX: Unified VLM Training for Flexible Visual Tokens Inference

Apr 04, 2025

StyledStreets: Multi-style Street Simulator with Spatial and Temporal Consistency

Mar 27, 2025

Finetuning Generative Trajectory Model with Reinforcement Learning from Human Feedback

Mar 13, 2025