Chih-Kuan Yeh

Sample based Explanations via Generalized Representers

Oct 27, 2023
Che-Ping Tsai, Chih-Kuan Yeh, Pradeep Ravikumar

We propose a general class of sample-based explanations of machine learning models, which we term generalized representers. To measure the effect of a training sample on a model's test prediction, generalized representers use two components: a global sample importance that quantifies the importance of the training point to the model and is invariant to test samples, and a local sample importance that measures the similarity between the training sample and the test point with a kernel. A key contribution of the paper is to show that generalized representers are the only class of sample-based explanations satisfying a natural set of axiomatic properties. We discuss approaches to extract global importances given a kernel, as well as natural choices of kernels for modern non-linear models. As we show, many popular existing sample-based explanations can be cast as generalized representers with particular choices of kernels and approaches to extract global importances. Additionally, we conduct empirical comparisons of different generalized representers on two image and two text classification datasets.
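
Read schematically, the decomposition described above can be summarized in a single formula; the notation is a sketch, with $\alpha_i$ denoting the global importance of training point $x_i$ and $k$ the kernel measuring local similarity to the test point $x_t$:

$\phi(x_i, x_t) = \alpha_i \, k(x_i, x_t)$

so that a training point's attribution is the product of a test-independent weight and a kernel similarity.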

* Accepted to NeurIPS 2023

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes

May 03, 2023
Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister

Deploying large language models (LLMs) is challenging because they are memory-inefficient and compute-intensive for practical applications. In response, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve performance comparable to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) does so while using less training data than finetuning or distillation require. Our method extracts LLM rationales as additional supervision for small models within a multi-task training framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with far fewer labeled/unlabeled training examples. Second, compared to LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our 770M T5 model outperforms the 540B PaLM model using only 80% of the available data on a benchmark task.
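
As a sketch of the multi-task framework described in the abstract (the loss terms and the weighting $\lambda$ are assumptions, not taken from the paper), the small model $f$ is trained to predict both the LLM-provided label $\hat{y}$ and rationale $\hat{r}$:

$\mathcal{L} = \mathcal{L}_{\text{label}}\big(f(x), \hat{y}\big) + \lambda \, \mathcal{L}_{\text{rationale}}\big(f(x), \hat{r}\big)$

with the rationale serving as additional supervision during training rather than as an input at inference time.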

* Accepted to Findings of ACL 2023 

Concept Gradient: Concept-based Interpretation Without Linear Assumption

Aug 31, 2022
Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh

Concept-based interpretations of black-box models are often more intuitive for humans to understand. The most widely adopted approach for concept-based interpretation is the Concept Activation Vector (CAV). CAV relies on learning a linear relation between some latent representation of a given model and concepts. This linear separability is usually implicitly assumed but does not hold true in general. In this work, we start from the original intent of concept-based interpretation and propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions. We show that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change in a concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space. We demonstrate empirically that CG outperforms CAV on both toy examples and real-world datasets.
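
One way to sketch the construction: with $f$ the model and $g$ a (possibly non-linear) function mapping inputs to concept values, a gradient with respect to concepts can be obtained by chaining through $g$; the pseudo-inverse form below is an assumption about the precise definition, not a quote from the paper:

$\mathrm{CG}(x) = \nabla g(x)^{\dagger} \, \nabla f(x)$

which, for a linear $g$, recovers a directional-derivative-style quantity in the spirit of CAV.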

Faith-Shap: The Faithful Shapley Interaction Index

Mar 09, 2022
Che-Ping Tsai, Chih-Kuan Yeh, Pradeep Ravikumar

Shapley values, which were originally designed to assign attributions to individual players in coalition games, have become a commonly used approach in explainable machine learning to provide attributions to input features for black-box machine learning models. A key attraction of Shapley values is that they uniquely satisfy a very natural set of axiomatic properties. However, extending the Shapley value to assign attributions to interactions rather than individual players, i.e. an interaction index, is non-trivial: the natural set of axioms for the original Shapley values, extended to the context of interactions, no longer specifies a unique interaction index. Many proposals thus introduce additional, less "natural" axioms, while sacrificing the key axiom of efficiency, in order to obtain unique interaction indices. In this work, rather than introduce additional conflicting axioms, we adopt the viewpoint of Shapley values as coefficients of the most faithful linear approximation to the pseudo-Boolean coalition game value function. By extending linear to $\ell$-order polynomial approximations, we can then define the general family of faithful interaction indices. We show that by additionally requiring the faithful interaction indices to satisfy interaction-extensions of the standard individual Shapley axioms (dummy, symmetry, linearity, and efficiency), we obtain a unique Faithful Shapley Interaction index, which we denote Faith-Shap, as a natural generalization of the Shapley value to interactions. We then provide some illustrative contrasts of Faith-Shap with previously proposed interaction indices, and further investigate some of its interesting algebraic properties. We also show the computational efficiency of computing Faith-Shap, together with some additional qualitative insights, via illustrative experiments.
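
As a sketch of the faithfulness viewpoint above, the order-$\ell$ faithful interaction indices can be read as coefficients of the best $\ell$-order polynomial fit to the set value function $v$ under some weighting $\mu$ over coalitions (the weighting and notation here are schematic assumptions):

$\mathcal{E}(v) \in \arg\min_{\{\mathcal{E}_S\}} \sum_{T \subseteq N} \mu(T) \Big( v(T) - \sum_{S \subseteq T,\, |S| \le \ell} \mathcal{E}_S \Big)^2$

with Faith-Shap being the member of this family singled out by the interaction-extended Shapley axioms.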

Human-Centered Concept Explanations for Neural Networks

Feb 25, 2022
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar

Understanding complex machine learning models such as deep neural networks with explanations is crucial in various applications. Many explanations stem from the model perspective and may not effectively communicate why the model is making its predictions at the right level of abstraction. For example, assigning importance weights to individual pixels in an image can only express which parts of that particular image are important to the model, whereas humans may prefer an explanation framed in terms of concepts. In this work, we review the emerging area of concept-based explanations. We start by introducing concept explanations, including the class of Concept Activation Vectors (CAV), which characterize concepts using vectors in appropriate spaces of neural activations, and discuss different properties of useful concepts and approaches to measure the usefulness of concept vectors. We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats. Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
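
For reference, the CAV-based conceptual sensitivity reviewed in this line of work takes roughly the following form (notation is a sketch): with $f_l(x)$ the layer-$l$ activations, $h_{l,k}$ the map from those activations to the class-$k$ logit, and $v_C^l$ the concept activation vector,

$S_{C,k,l}(x) = \nabla h_{l,k}\big(f_l(x)\big) \cdot v_C^{l}$

and the TCAV score for a class is the fraction of its examples with positive sensitivity.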

* Book chapter in Neuro-Symbolic Artificial Intelligence: The State of the Art, vol. 342, pp. 337-352, 2022

Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations

Feb 24, 2022
Chih-Kuan Yeh, Kuan-Yun Lee, Frederick Liu, Pradeep Ravikumar

A popular explainable AI (XAI) approach to quantify the feature importance of a given model is via Shapley values. Shapley values arose in cooperative games, and hence a critical ingredient for computing them in an XAI context is a so-called value function, which computes the "value" of a subset of features and connects machine learning models to cooperative games. There are many possible choices for such value functions, which broadly fall into two categories: on-manifold and off-manifold value functions, which take an observational and an interventional viewpoint respectively. Both of these classes, however, have their respective flaws: on-manifold value functions violate key axiomatic properties and are computationally expensive, while off-manifold value functions pay less heed to the data manifold and evaluate the model on regions for which it wasn't trained. Thus, there is no consensus on which class of value functions to use. In this paper, we show that in addition to these existing issues, both classes of value functions are prone to adversarial manipulations in low-density regions. We formalize, in a set of axioms, the desiderata of value functions that respect both the model and the data manifold and are robust to perturbations on off-manifold regions. We show that there exists a unique value function satisfying these axioms, which we term the Joint Baseline value function, with the resulting Shapley value termed Joint Baseline Shapley (JBshap), and validate the effectiveness of JBshap in experiments.
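
For context, once a value function $v$ over subsets of the feature set $N$ is fixed, the Shapley value of feature $i$ is

$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (|N|-|S|-1)!}{|N|!} \big( v(S \cup \{i\}) - v(S) \big)$

so the choice of $v$, which is the subject of this paper, fully determines the resulting attributions.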

* AISTATS 2022 

First is Better Than Last for Training Data Influence

Feb 24, 2022
Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, Pradeep Ravikumar

The ability to identify influential training examples enables us to debug training data and explain model behavior. Existing techniques are based on the flow of influence through the model parameters. For large models in NLP applications, it is often computationally infeasible to study this flow through all model parameters; therefore, techniques usually pick the last layer of weights. Our first observation is that, for classification problems, the last layer is reductive and does not encode sufficient input-level information. Deleting influential examples identified by this measure typically does not change the model's behavior much. We propose a technique called TracIn-WE that modifies a method called TracIn to operate on the word-embedding layer instead of the last layer. This might raise the opposite concern: that the word-embedding layer does not encode sufficient high-level information. However, we find that gradients (unlike embeddings) do not suffer from this, possibly because they chain through higher layers. We show that TracIn-WE significantly outperforms other data-influence methods applied to the last layer, by 4-10 times on the case-deletion evaluation across three language classification tasks. In addition, TracIn-WE can produce scores not just at the level of training examples, but at the level of individual words within them, a further aid in debugging.
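
A minimal sketch of the idea in Python, restricting TracIn's checkpoint-wise gradient dot products to the word-embedding parameters; the toy bag-of-embeddings classifier, checkpoint handling, and function names are illustrative assumptions rather than the paper's implementation.

import torch
import torch.nn.functional as F

class TinyTextClassifier(torch.nn.Module):
    def __init__(self, vocab_size=100, dim=16, n_classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.out = torch.nn.Linear(dim, n_classes)

    def forward(self, token_ids):
        # token_ids: 1-D LongTensor of word indices for one example
        return self.out(self.emb(token_ids).mean(dim=0))

def embedding_grad(model, token_ids, label):
    """Gradient of the loss w.r.t. the word-embedding table only."""
    model.zero_grad()
    loss = F.cross_entropy(model(token_ids).unsqueeze(0), label.unsqueeze(0))
    loss.backward()
    return model.emb.weight.grad.detach().flatten().clone()

def tracin_we_score(checkpoints, learning_rates, train_example, test_example):
    """Sum over checkpoints of lr * <train grad, test grad> on the embedding layer."""
    score = 0.0
    for state_dict, lr in zip(checkpoints, learning_rates):
        model = TinyTextClassifier()
        model.load_state_dict(state_dict)
        g_train = embedding_grad(model, *train_example)
        g_test = embedding_grad(model, *test_example)
        score += lr * torch.dot(g_train, g_test).item()
    return score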

Evaluations and Methods for Explanation through Robustness Analysis

May 31, 2020
Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh

Among the multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive ways to explain a model. In this paper, we establish the link between a set of features and a prediction with a new evaluation criterion, robustness analysis, which measures the minimum distortion of an adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides the most robust support for a prediction, and we can also extract a set of features that contrasts the current prediction with a target class by using a targeted adversarial attack. By applying this methodology to various prediction tasks across multiple domains, we observe that the derived explanations indeed capture the significant feature set, both qualitatively and quantitatively.
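
As a sketch of the criterion (the norm and constraint set here are assumptions): for a candidate explanation set $S$, restrict the adversarial perturbation to the complementary features $\bar{S}$ and measure the minimum distortion needed to change the prediction,

$\epsilon^{*}_{\bar{S}} = \min_{\delta:\, \delta_S = 0} \|\delta\| \quad \text{s.t.} \quad \arg\max_c f_c(x + \delta) \neq y,$

so a larger $\epsilon^{*}_{\bar{S}}$ indicates that $S$ provides more robust support for the prediction.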

Minimizing FLOPs to Learn Efficient Sparse Representations

Apr 12, 2020
Biswajit Paria, Chih-Kuan Yeh, Ian E. H. Yen, Ning Xu, Pradeep Ravikumar, Barnabás Póczos

Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification. Retrieving such representations from a large database is, however, computationally challenging. Approximate methods based on learning compact representations, such as locality-sensitive hashing, product quantization, and PCA, have been widely explored for this problem. In this work, in contrast to learning compact representations, we propose to learn high-dimensional and sparse representations that have a representational capacity similar to dense embeddings while being more efficient, since sparse matrix multiplication can be much faster than dense multiplication. Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings, provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations (FLOPs) incurred during retrieval. Our experiments show that our approach is competitive with other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets.
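
A short sketch of one plausible form of such a regularizer, in PyTorch; the exact relaxation used in the paper may differ, so treat this as illustrative. Squaring and summing the per-dimension mean absolute activation penalizes dimensions that are active for many examples, pushing non-zero entries to spread uniformly.

import torch

def flops_regularizer(embeddings: torch.Tensor) -> torch.Tensor:
    """Continuous relaxation of expected retrieval FLOPs for a batch of
    embeddings of shape (batch, dim): sum over dimensions of the squared
    mean absolute activation. Illustrative form, assumed from the abstract."""
    mean_abs = embeddings.abs().mean(dim=0)  # per-dimension average activation level
    return (mean_abs ** 2).sum()

# Usage (lambda_flops is an assumed hyperparameter name):
# loss = task_loss + lambda_flops * flops_regularizer(batch_embeddings)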

* Published at ICLR 2020 

On Concept-Based Explanations in Deep Neural Networks

Oct 17, 2019
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Pradeep Ravikumar, Tomas Pfister

Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding this high-level intelligence can be enabled by deciphering the concepts on which they base their decisions, akin to human-level thinking. In this paper, we study concept-based explainability for DNNs in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness. We show that, under degenerate conditions, our method is equivalent to Principal Component Analysis. Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts. We use game-theoretic notions to aggregate over concept subsets and define an importance score for each discovered concept, which we call ConceptSHAP. On specifically designed synthetic datasets and real-world text and image datasets, we validate the effectiveness of our framework in finding concepts that are both complete in explaining decisions and interpretable.
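
A brute-force sketch in Python of a ConceptSHAP-style score: Shapley attribution of a completeness score across a handful of discovered concepts. The completeness function is user-supplied, and this exhaustive enumeration is only practical for small concept sets; it is not the paper's estimator.

from itertools import combinations
from math import factorial
from typing import Callable, Dict, FrozenSet, Sequence

def concept_shap(concepts: Sequence[str],
                 completeness: Callable[[FrozenSet[str]], float]) -> Dict[str, float]:
    """Exact Shapley value of each concept with respect to a completeness score."""
    n = len(concepts)
    scores: Dict[str, float] = {}
    for c in concepts:
        others = [x for x in concepts if x != c]
        total = 0.0
        for k in range(n):  # subset sizes 0 .. n-1
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (completeness(s | {c}) - completeness(s))
        scores[c] = total
    return scores

# Example with three hypothetical concepts and a made-up completeness function:
# print(concept_shap(["stripes", "wheels", "fur"],
#                    lambda s: 0.2 * len(s) + (0.3 if "stripes" in s else 0.0)))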
