
Sebastian Lapuschkin

Deep Learning-based Multi Project InP Wafer Simulation for Unsupervised Surface Defect Detection

Jun 12, 2025

Relevance-driven Input Dropout: an Explanation-guided Regularization Technique

May 27, 2025

From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance

May 26, 2025

The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation

May 21, 2025

Prisma: An Open Source Toolkit for Mechanistic Interpretability in Vision and Video

Apr 28, 2025

ASIDE: Architectural Separation of Instructions and Data in Language Models

Mar 13, 2025

Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations

Mar 07, 2025

FADE: Why Bad Descriptions Happen to Good Features

Feb 24, 2025

A Close Look at Decomposition-based XAI-Methods for Transformer Language Models

Feb 21, 2025

Ensuring Medical AI Safety: Explainable AI-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data

Jan 23, 2025