Anna Hedström

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
Jan 12, 2024
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne

Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability
Dec 13, 2023
Shanghua Liu, Anna Hedström, Deepak Hanike Basavegowda, Cornelia Weltzien, Marina M.-C. Höhne

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
Mar 01, 2023
Philine Bommer, Marlene Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Feb 14, 2023
Anna Hedström, Philine Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations
Feb 14, 2022
Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

NoiseGrad: enhancing explanations by introducing stochasticity to model weights
Jun 18, 2021
Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne
