
Wojciech Samek

Towards Visually Explaining Statistical Tests with Applications in Biomedical Imaging

Jan 20, 2026

Multimodal Deep Learning for Prediction of Progression-Free Survival in Patients with Neuroendocrine Tumors Undergoing 177Lu-based Peptide Receptor Radionuclide Therapy

Nov 07, 2025

Atlas-Alignment: Making Interpretability Transferable Across Language Models

Oct 31, 2025

LieSolver: A PDE-constrained solver for IBVPs using Lie symmetries

Oct 29, 2025

Model Science: getting serious about verification, explanation and control of AI systems

Aug 27, 2025

Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs

Jun 16, 2025

Deep Learning-based Multi Project InP Wafer Simulation for Unsupervised Surface Defect Detection

Jun 12, 2025

Relevance-driven Input Dropout: an Explanation-guided Regularization Technique

May 27, 2025

From What to How: Attributing CLIP's Latent Components Reveals Unexpected Semantic Reliance

May 26, 2025

The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation

May 21, 2025