Been Kim

Human-in-the-Loop Interpretability Prior

Oct 30, 2018
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez

Sanity Checks for Saliency Maps

Oct 28, 2018
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim

To Trust Or Not To Trust A Classifier

Oct 26, 2018
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta

Interpreting Black Box Predictions using Fisher Kernels

Oct 23, 2018
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

Oct 08, 2018
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

Proceedings of the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018)

Jul 03, 2018
Been Kim, Kush R. Varshney, Adrian Weller

Evaluating Feature Importance Estimates

Jun 28, 2018
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim

xGEMs: Generating Examplars to Explain Black-Box Models

Jun 22, 2018
Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

Jun 07, 2018
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, Rory Sayres

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation

Feb 02, 2018
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, Finale Doshi-Velez
