Been Kim

Debugging Tests for Model Explanations

Nov 10, 2020
Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim

Concept Bottleneck Models

Jul 09, 2020
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang

On Concept-Based Explanations in Deep Neural Networks

Oct 17, 2019
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister

BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth

Jul 23, 2019
Mengjiao Yang, Been Kim

Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems

Jul 22, 2019
Shalmali Joshi, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, Joydeep Ghosh

Explaining Classifiers with Causal Concept Effect (CaCE)

Jul 16, 2019
Yash Goyal, Uri Shalit, Been Kim

Visualizing and Measuring the Geometry of BERT

Jun 06, 2019
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg

Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure

Mar 21, 2019
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio

Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks

Feb 07, 2019
Amirata Ghorbani, James Wexler, Been Kim

An Evaluation of the Human-Interpretability of Explanation

Jan 31, 2019
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez
