Christopher J. Anders

Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation

Oct 03, 2023
Sidney Bender, Christopher J. Anders, Pattarawat Chormai, Heike Marxfeld, Jan Herrmann, Grégoire Montavon

From Hope to Safety: Unlearning Biases of Deep Models by Enforcing the Right Reasons in Latent Space

Aug 18, 2023
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

Detecting and Mitigating Mode-Collapse for Flow-based Sampling of Lattice Field Theories

Feb 27, 2023
Kim A. Nicoli, Christopher J. Anders, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging

Feb 07, 2022
Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

Jun 24, 2021
Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Towards Robust Explanations for Deep Neural Networks

Dec 18, 2020
Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert Müller, Pan Kessel

Fairwashing Explanations with Off-Manifold Detergent

Jul 20, 2020
Christopher J. Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, Klaus-Robert Müller, Pan Kessel

On Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models

Jul 14, 2020
Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati

Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond

Mar 17, 2020
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed

Dec 22, 2019
Christopher J. Anders, Talmaj Marinč, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin
