Marco Tulio Ribeiro

Programs as Black-Box Explanations

Nov 22, 2016
Sameer Singh, Marco Tulio Ribeiro, Carlos Guestrin

Recent work in model-agnostic explanations of black-box machine learning has demonstrated that interpretability of complex models does not have to come at the cost of accuracy or model flexibility. However, it is not clear which family of explanations, such as linear models, decision trees, or rule lists, is the appropriate one to consider, and different tasks and models may benefit from different kinds of explanations. Instead of picking a single family of representations, in this work we propose to use "programs" as model-agnostic explanations. We show that small programs can be expressive yet intuitive as explanations, and generalize over a number of existing interpretable families. We propose a prototype program induction method based on simulated annealing that approximates the local behavior of black-box classifiers around a specific prediction using random perturbations. Finally, we present preliminary applications on small datasets and show that the generated explanations are intuitive and accurate for a number of classifiers.

* Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems 
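
A rough sketch of this idea, under simplifying assumptions, might look like the following Python. Here a "program" is just an ordered list of single-feature threshold rules, the perturbations are Gaussian, and all names (induce_program, predict_fn, etc.) are illustrative rather than taken from the paper; it only illustrates simulated-annealing search for a small program that imitates the black box locally.

    import random
    import numpy as np

    def induce_program(predict_fn, x, num_samples=500, steps=2000, scale=1.0):
        """Search for a tiny rule-list 'program' that mimics predict_fn near x.

        Assumes x is a 1-D numpy array and predict_fn returns class labels.
        """
        d = len(x)
        X = x + np.random.normal(0.0, scale, size=(num_samples, d))
        y = predict_fn(X)                              # black-box labels to imitate
        default = predict_fn(x.reshape(1, -1))[0]      # output when no rule fires
        labels = sorted(set(y.tolist()))

        def run(program):
            # A program is an ordered list of (feature, threshold, label) rules;
            # the first rule whose condition holds decides the output.
            out = np.full(num_samples, default)
            decided = np.zeros(num_samples, dtype=bool)
            for feat, thresh, label in program:
                fires = (X[:, feat] > thresh) & ~decided
                out[fires] = label
                decided |= fires
            return out

        def mutate(program):
            feat = random.randrange(d)
            rule = (feat, float(x[feat] + np.random.normal(0.0, scale)),
                    random.choice(labels))
            new = list(program)
            if new and random.random() < 0.5:
                new[random.randrange(len(new))] = rule  # replace an existing rule
            else:
                new.append(rule)                        # grow the program
            return new[:4]                              # keep the program small

        program, score, temp = [], 0.0, 1.0
        for _ in range(steps):
            candidate = mutate(program)
            acc = float((run(candidate) == y).mean())   # local fidelity to the black box
            # Metropolis criterion: keep improvements, occasionally keep worse moves.
            if acc >= score or random.random() < np.exp((acc - score) / max(temp, 1e-6)):
                program, score = candidate, acc
            temp *= 0.999                               # cooling schedule
        return program, score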

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

Nov 17, 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model's behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations for which the coverage boundaries are very clear. We compare aLIME to linear LIME with simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.

* Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems 
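
The coverage and precision estimates behind this kind of rule-based explanation can be sketched as follows. The continuous perturbation scheme, the tolerance for "satisfying" a rule, and every name here (rule_coverage_precision, predict_fn, rule_features) are illustrative assumptions, not the aLIME algorithm itself; a search in this spirit would grow the rule until estimated precision clears a threshold while preferring higher coverage.

    import numpy as np

    def rule_coverage_precision(predict_fn, x, rule_features, tol=0.25,
                                num_samples=1000, scale=1.0):
        """Estimate coverage and precision of a rule that fixes `rule_features`
        (a list of feature indices) to the values they take in instance x.

        Assumes x is a 1-D numpy array and predict_fn returns class labels.
        """
        d = len(x)
        target = predict_fn(x.reshape(1, -1))[0]          # the prediction to explain

        # Coverage: how often unconstrained perturbations already satisfy the rule.
        free = x + np.random.normal(0.0, scale, size=(num_samples, d))
        close = np.abs(free[:, rule_features] - x[rule_features]) <= tol * scale
        coverage = float(np.all(close, axis=1).mean())

        # Precision: among samples that satisfy the rule (rule features fixed to
        # x's values), how often does the black box keep the same prediction?
        fixed = x + np.random.normal(0.0, scale, size=(num_samples, d))
        fixed[:, rule_features] = x[rule_features]
        precision = float((predict_fn(fixed) == target).mean())

        return coverage, precision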

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

Aug 09, 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

Figure 1 for "Why Should I Trust You?": Explaining the Predictions of Any Classifier
Figure 2 for "Why Should I Trust You?": Explaining the Predictions of Any Classifier
Figure 3 for "Why Should I Trust You?": Explaining the Predictions of Any Classifier
Figure 4 for "Why Should I Trust You?": Explaining the Predictions of Any Classifier

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
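
The core recipe, fitting an interpretable model that is locally faithful around one prediction, can be sketched in a few lines of Python. The Gaussian perturbations, the exponential proximity kernel, and the names (explain_instance, predict_fn, kernel_width) are simplifying assumptions for tabular data; the paper's method also handles text and images through interpretable representations.

    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_instance(predict_fn, x, num_samples=1000, kernel_width=0.75, scale=1.0):
        """Fit a locally weighted linear surrogate around x and return its weights."""
        d = len(x)
        # Sample perturbations of the instance (one simple scheme for tabular data).
        Z = x + np.random.normal(0.0, scale, size=(num_samples, d))
        y = predict_fn(Z)                                 # black-box scores to approximate
        # Weight each perturbation by proximity to x (exponential kernel).
        distances = np.linalg.norm(Z - x, axis=1)
        weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
        # Interpretable surrogate: a weighted linear model, locally faithful to predict_fn.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=weights)
        return surrogate.coef_                            # per-feature local importance

With a fitted scikit-learn classifier clf, for example, predict_fn could be lambda Z: clf.predict_proba(Z)[:, 1], so the returned coefficients describe how each feature locally pushes that class probability up or down around x.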


Model-Agnostic Interpretability of Machine Learning

Jun 16, 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, in feature engineering, in deciding whether to trust and act upon predictions, and in building more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating machine learning models as black-box functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently introduced model-agnostic explanation approach (LIME) that addresses these challenges.

* Presented at the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY