Peter Schlicht

What should AI see? Using the Public's Opinion to Determine the Perception of an AI

Jun 09, 2022
Robin Chan, Radin Dardashti, Meike Osinski, Matthias Rottmann, Dominik Brüggemann, Cilia Rücker, Peter Schlicht, Fabian Hüger, Nikol Rummel, Hanno Gottschalk

Tailored Uncertainty Estimation for Deep Learning Systems

Apr 29, 2022
Joachim Sicking, Maram Akila, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Wirtz, Stefan Wrobel

Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis

Jun 10, 2021
Julia Rosenzweig, Eduardo Brito, Hans-Ulrich Kobialka, Maram Akila, Nico M. Schmidt, Peter Schlicht, Jan David Schneider, Fabian Hüger, Matthias Rottmann, Sebastian Houben, Tim Wirtz

The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing

Jan 13, 2021
Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin J. Varghese, Fabian Hüger, Peter Schlicht, Tim Fingscheidt

Approaching Neural Network Uncertainty Realism

Jan 08, 2021
Joachim Sicking, Alexander Kister, Matthias Fahrland, Stefan Eickeler, Fabian Hüger, Stefan Rüping, Peter Schlicht, Tim Wirtz

Improving Video Instance Segmentation by Light-weight Temporal Uncertainty Estimates

Dec 14, 2020
Kira Maag, Matthias Rottmann, Fabian Hüger, Peter Schlicht, Hanno Gottschalk

From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation

Dec 02, 2020
Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt

A Self-Supervised Feature Map Augmentation (FMA) Loss and Combined Augmentations Finetuning to Efficiently Improve the Robustness of CNNs

Dec 02, 2020
Nikhil Kapoor, Chun Yuan, Jonas Löhdefink, Roland Zimmermann, Serin Varghese, Fabian Hüger, Nico Schmidt, Peter Schlicht, Tim Fingscheidt

Risk Assessment for Machine Learning Models

Nov 09, 2020
Paul Schwerdtner, Florens Greßner, Nikhil Kapoor, Felix Assion, René Sass, Wiebke Günther, Fabian Hüger, Peter Schlicht

Self-Supervised Domain Mismatch Estimation for Autonomous Perception

Jun 15, 2020
Jonas Löhdefink, Justin Fehrling, Marvin Klingner, Fabian Hüger, Peter Schlicht, Nico M. Schmidt, Tim Fingscheidt
