Gesina Schwalbe

GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces

Nov 24, 2023
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes

Sep 08, 2023
Youssef Shoeb, Robin Chan, Gesina Schwalbe, Azarm Nowzard, Fatma Güney, Hanno Gottschalk

Quantified Semantic Comparison of Convolutional Neural Networks

Apr 30, 2023
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability

Apr 28, 2023
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

Concept Embedding Analysis: A Review

Mar 25, 2022
Gesina Schwalbe

Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks in Perception Tasks

Jan 03, 2022
Gesina Schwalbe, Christian Wirth, Ute Schmid

Expressive Explanations of DNNs by Combining Concept Analysis with ILP

May 16, 2021
Johannes Rabold, Gesina Schwalbe, Ute Schmid

XAI Method Properties: A (Meta-)study

May 15, 2021
Gesina Schwalbe, Bettina Finzel

Verification of Size Invariance in DNN Activations using Concept Embeddings

May 14, 2021
Gesina Schwalbe

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety

Apr 29, 2021
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
