
From "Where" to "What": Towards Human-Understandable Explanations through Concept Relevance Propagation


Jun 07, 2022
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

* 79 pages (40 pages manuscript, 10 pages references, 29 pages appendix); 51 figures (26 in manuscript, 25 in appendix); 1 table (in appendix)


Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI


May 11, 2022
Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin

* 14 pages including appendix, 5 figures, 2 tables, 1 algorithm listing. v2 update increases figure readability, updates the Fig. 5 caption, and adds our collaborators Dario and An as co-authors.


But that's not why: Inference adjustment by interactive prototype deselection


Mar 18, 2022
Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse



Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement


Mar 15, 2022
Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek



Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations


Feb 14, 2022
Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

* 4 pages, 1 figure, 1 table 


Measurably Stronger Explanation Reliability via Model Canonization


Feb 14, 2022
Franz Motzkus, Leander Weber, Sebastian Lapuschkin

* 5 pages, 4 figures 


PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging


Feb 07, 2022
Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin



ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs


Sep 09, 2021
Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin

* 21 pages, 10 figures, 1 table 


Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy


Jun 24, 2021
Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

* 10 pages, 3 figures 
