Hubert Baniecki

Explaining Similarity in Vision-Language Encoders with Weighted Banzhaf Interactions

Aug 07, 2025

Birds look like cars: Adversarial analysis of intrinsically interpretable deep learning

Mar 11, 2025

Interpreting CLIP with Hierarchical Sparse Autoencoders

Feb 27, 2025

shapiq: Shapley Interactions for Machine Learning

Oct 02, 2024

Aggregated Attributions for Explanatory Analysis of 3D Segmentation Models

Jul 24, 2024

Efficient and Accurate Explanation Estimation with Distribution Compression

Jun 26, 2024

On the Robustness of Global Feature Effect Explanations

Jun 13, 2024

Red-Teaming Segment Anything Model

Apr 02, 2024

Interpretable Machine Learning for Survival Analysis

Mar 15, 2024

Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI

Mar 14, 2024