
Maximilian Dreyer


Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification

Apr 16, 2024
Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer


Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression

Apr 15, 2024
Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin


PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits

Apr 09, 2024
Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin


AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers

Feb 08, 2024
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek


Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations

Nov 28, 2023
Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin


From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space

Aug 18, 2023
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin


Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

Mar 27, 2023
Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin


Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

Nov 21, 2022
Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin


From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation

Jun 07, 2022
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

Figure 1 for From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation
Figure 2 for From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation
Figure 3 for From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation
Figure 4 for From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation
Viaarxiv icon

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

Sep 09, 2021
Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin
