Sebastian Lapuschkin

Concept-based explanations of Segmentation and Detection models in Natural Disaster Management

Mar 24, 2026

Building Trust in PINNs: Error Estimation through Finite Difference Methods

Mar 16, 2026

X-SYS: A Reference Architecture for Interactive Explanation Systems

Feb 13, 2026

Atlas-Alignment: Making Interpretability Transferable Across Language Models

Oct 31, 2025

LieSolver: A PDE-constrained solver for IBVPs using Lie symmetries

Oct 29, 2025

Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs

Jun 16, 2025

Deep Learning-based Multi Project InP Wafer Simulation for Unsupervised Surface Defect Detection

Jun 12, 2025

Relevance-driven Input Dropout: an Explanation-guided Regularization Technique

May 27, 2025

From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance

May 26, 2025

The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation

May 21, 2025