Michelle Karg

Streamlining the Development of Active Learning Methods in Real-World Object Detection

Aug 27, 2025

Prediction Accuracy & Reliability: Classification and Object Localization under Distribution Shift

Sep 05, 2024

Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection

Apr 26, 2024

Overcoming the Limitations of Localization Uncertainty: Efficient & Exact Non-Linear Post-Processing and Calibration

Jun 15, 2023

Residual Error: a New Performance Measure for Adversarial Robustness

Jun 18, 2021

Vulnerability Under Adversarial Machine Learning: Bias or Variance?

Aug 01, 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness

Mar 03, 2020

StressedNets: Efficient Feature Representations via Stress-induced Evolutionary Synthesis of Deep Neural Networks

Jan 16, 2018