
Yun Ye

GigaWorld-Policy: An Efficient Action-Centered World-Action Model

Mar 18, 2026

TacMamba: A Tactile History Compression Adapter Bridging Fast Reflexes and Slow VLA Reasoning

Mar 02, 2026

GigaBrain-0.5M*: A VLA That Learns From World Model-Based Reinforcement Learning

Feb 12, 2026

RealHD: A High-Quality Dataset for Robust Detection of State-of-the-Art AI-Generated Images

Feb 11, 2026

EMMA: Generalizing Real-World Robot Manipulation via Generative Visual Transfer

Sep 26, 2025

EgoDemoGen: Novel Egocentric Demonstration Generation Enables Viewpoint-Robust Manipulation

Sep 26, 2025

MimicDreamer: Aligning Human and Robot Demonstrations for Scalable VLA Training

Sep 26, 2025

Rethinking Lanes and Points in Complex Scenarios for Monocular 3D Lane Detection

Mar 08, 2025

GraphAD: Interaction Scene Graph for End-to-end Autonomous Driving

Apr 07, 2024

Detecting As Labeling: Rethinking LiDAR-camera Fusion in 3D Object Detection

Nov 13, 2023