Sebastian Lapuschkin

From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation

Add code
Bookmark button
Alert button
Jun 07, 2022
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI
May 11, 2022
Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin

But that's not why: Inference adjustment by interactive prototype deselection
Mar 18, 2022
Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Mar 15, 2022
Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations
Feb 14, 2022
Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

Measurably Stronger Explanation Reliability via Model Canonization
Feb 14, 2022
Franz Motzkus, Leander Weber, Sebastian Lapuschkin

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging
Feb 07, 2022
Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
Sep 09, 2021
Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Jun 24, 2021
Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin
