Gregory Plumb

Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms

Jun 13, 2023
Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet Talwalkar

Evaluating Systemic Error Detection Methods using Synthetic Images

Jul 08, 2022
Gregory Plumb, Nari Johnson, Ángel Alexander Cabrera, Marco Tulio Ribeiro, Ameet Talwalkar

Use-Case-Grounded Simulations for Explanation Evaluation

Jun 05, 2022
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar

Finding and Fixing Spurious Patterns with Explanations

Jun 03, 2021
Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar

Sanity Simulations for Saliency Methods

May 13, 2021
Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

Towards Connecting Use Cases and Methods in Interpretable Machine Learning

Mar 10, 2021
Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar

A Learning Theoretic Perspective on Local Explainability

Nov 02, 2020
Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar

Explaining Groups of Points in Low-Dimensional Representations

Mar 18, 2020
Gregory Plumb, Jonathan Terhorst, Sriram Sankararaman, Ameet Talwalkar

Regularizing Black-box Models for Improved Interpretability (HILL 2019 Version)

May 31, 2019
Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar
