Zhenjiang Mao

Temporalizing Confidence: Evaluation of Chain-of-Thought Reasoning with Signal Temporal Logic

Jun 09, 2025

Generalizable Image Repair for Robust Visual Autonomous Racing

Mar 07, 2025

Four Principles for Physically Interpretable World Models

Mar 04, 2025

Towards Physically Interpretable World Models: Meaningful Weakly Supervised Representations for Visual Trajectory Prediction

Dec 17, 2024

Language-Enhanced Latent Representations for Out-of-Distribution Detection in Autonomous Driving

May 02, 2024

Zero-shot Safety Prediction for Autonomous Robots with Foundation World Models

Apr 02, 2024

How Safe Am I Given What I See? Calibrated Prediction of Safety Chances for Image-Controlled Autonomy

Aug 23, 2023