
Zhongpu Xia

Learning Rollout from Sampling: An R1-Style Tokenized Traffic Simulation Model

Mar 26, 2026

DreamerAD: Efficient Reinforcement Learning via Latent World Model for Autonomous Driving

Mar 25, 2026

PerlAD: Towards Enhanced Closed-loop End-to-end Autonomous Driving with Pseudo-simulation-based Reinforcement Learning

Mar 16, 2026

MeanFuser: Fast One-Step Multi-Modal Trajectory Generation and Adaptive Reconstruction via MeanFlow for End-to-End Autonomous Driving

Feb 23, 2026

WorldRFT: Latent World Model Planning with Reinforcement Fine-Tuning for Autonomous Driving

Dec 22, 2025

TakeAD: Preference-based Post-optimization for End-to-end Autonomous Driving with Expert Takeover Data

Dec 22, 2025

TransDiffuser: End-to-end Trajectory Generation with Decorrelated Multi-modal Representation for Autonomous Driving

May 14, 2025

TokenFLEX: Unified VLM Training for Flexible Visual Tokens Inference

Apr 04, 2025

Finetuning Generative Trajectory Model with Reinforcement Learning from Human Feedback

Mar 13, 2025

Preliminary Investigation into Data Scaling Laws for Imitation Learning-Based End-to-End Autonomous Driving

Dec 03, 2024