Agustin Picard

TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability

Dec 11, 2023
Fanny Jourdan, Louis Béthune, Agustin Picard, Laurent Risser, Nicholas Asher

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

Jun 11, 2023
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks

May 14, 2023
Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean Michel Loubes, Nicholas Asher

CRAFT: Concept Recursive Activation FacTorization for Explainability

Nov 17, 2022
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre

A survey of Identification and mitigation of Machine Learning algorithmic biases in Image Analysis

Oct 10, 2022
Laurent Risser, Agustin Picard, Lucas Hervier, Jean-Michel Loubes

Xplique: A Deep Learning Explainability Toolbox

Jun 09, 2022
Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poche, Justin Plakoo, Remi Cadene, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Bethune, Agustin Picard, Claire Nicodeme, Laurent Gardes, Gregory Flandin, Thomas Serre
