Sebastian Lapuschkin

Human-Centered Evaluation of XAI Methods

Oct 11, 2023
Karam Dawoud, Wojciech Samek, Sebastian Lapuschkin, Sebastian Bosse

Layer-wise Feedback Propagation

Aug 23, 2023
Leander Weber, Jim Berend, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space

Aug 18, 2023
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

XAI-based Comparison of Input Representations for Audio Event Classification

Apr 27, 2023
Annika Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

Apr 12, 2023
Daniel G. Krakowczyk, Paul Prasse, David R. Reich, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

Mar 27, 2023
Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin

Explainable AI for Time Series via Virtual Inspection Layers

Mar 11, 2023
Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, Wojciech Samek

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

Feb 14, 2023
Anna Hedström, Philine Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

Optimizing Explanations by Network Canonization and Hyperparameter Search

Nov 30, 2022
Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Nov 22, 2022
Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek
