Stefan Haufe

cc-Shapley: Measuring Multivariate Feature Importance Needs Causal Context

Feb 23, 2026

Feature salience -- not task-informativeness -- drives machine learning model explanations

Feb 15, 2026

Generative clinical time series models trained on moderate amounts of patient data are privacy preserving

Feb 11, 2026

The effect of whitening on explanation performance

Feb 09, 2026

Minimizing False-Positive Attributions in Explanations of Non-Linear Models

May 16, 2025

Enhancing Brain Source Reconstruction through Physics-Informed 3D Neural Networks

Oct 31, 2024

Explainable AI needs formal notions of explanation correctness

Sep 26, 2024

GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations

Jun 17, 2024

EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods

May 20, 2024

XAI-TRIS: Non-linear benchmarks to quantify ML explanation performance

Jun 22, 2023