Xiaohao Xu

From Perfect to Noisy World Simulation: Customizable Embodied Multi-modal Perturbations for SLAM Robustness Benchmarking

Jun 24, 2024

Holmes-VAD: Towards Unbiased and Explainable Video Anomaly Detection via Multi-modal LLM

Jun 18, 2024

LogiCode: an LLM-Driven Framework for Logical Anomaly Detection

Jun 07, 2024

Self-supervised Pre-training for Transferable Multi-modal Perception

May 28, 2024

Optimizing LiDAR Placements for Robust Driving Perception in Adverse Conditions

Mar 25, 2024
Figure 1 for Optimizing LiDAR Placements for Robust Driving Perception in Adverse Conditions
Figure 2 for Optimizing LiDAR Placements for Robust Driving Perception in Adverse Conditions
Figure 3 for Optimizing LiDAR Placements for Robust Driving Perception in Adverse Conditions
Figure 4 for Optimizing LiDAR Placements for Robust Driving Perception in Adverse Conditions
Viaarxiv icon

Customizing Visual-Language Foundation Models for Multi-modal Anomaly Detection and Reasoning

Mar 17, 2024

GlanceVAD: Exploring Glance Supervision for Label-efficient Video Anomaly Detection

Mar 12, 2024

$\text{R}^2$-Bench: Benchmarking the Robustness of Referring Perception Models under Perturbations

Mar 07, 2024

Customizable Perturbation Synthesis for Robust SLAM Benchmarking

Feb 12, 2024

A Survey on Visual Anomaly Detection: Challenge, Approach, and Prospect

Jan 29, 2024