Peng Jia

Generalizable Engagement Estimation in Conversation via Domain Prompting and Parallel Attention

Aug 20, 2025

DriveAction: A Benchmark for Exploring Human-like Driving Decisions in VLA Models

Jun 06, 2025

GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control

May 29, 2025

TransDiffuser: End-to-end Trajectory Generation with Decorrelated Multi-modal Representation for Autonomous Driving

May 14, 2025

PosePilot: Steering Camera Pose for Generative World Models with Self-supervised Depth

May 03, 2025

Adaptive Detection of Fast Moving Celestial Objects Using a Mixture of Experts and Physical-Inspired Neural Network

Apr 10, 2025

TokenFLEX: Unified VLM Training for Flexible Visual Tokens Inference

Apr 04, 2025

VISTA: Unsupervised 2D Temporal Dependency Representations for Time Series Anomaly Detection

Apr 03, 2025

StyledStreets: Multi-style Street Simulator with Spatial and Temporal Consistency

Mar 27, 2025

Finetuning Generative Trajectory Model with Reinforcement Learning from Human Feedback

Mar 13, 2025