
Been Kim

Visualizing and Measuring the Geometry of BERT

Jun 06, 2019

Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure

Mar 21, 2019

Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks

Feb 07, 2019

An Evaluation of the Human-Interpretability of Explanation

Jan 31, 2019

Human-in-the-Loop Interpretability Prior

Oct 30, 2018

Sanity Checks for Saliency Maps

Oct 28, 2018

To Trust Or Not To Trust A Classifier

Oct 26, 2018

Interpreting Black Box Predictions using Fisher Kernels

Oct 23, 2018

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

Oct 08, 2018

Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning

Jul 03, 2018