Hanjie Chen

Explainability for Large Language Models: A Survey

Sep 17, 2023
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du

Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms are still unclear, and this lack of transparency poses unwanted risks for downstream applications. Therefore, understanding and explaining these models is crucial for elucidating their behaviors, limitations, and social impacts. In this paper, we introduce a taxonomy of explainability techniques and provide a structured overview of methods for explaining Transformer-based language models. We categorize techniques based on the training paradigms of LLMs: the traditional fine-tuning-based paradigm and the prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge. We also discuss metrics for evaluating generated explanations and how explanations can be leveraged to debug models and improve performance. Lastly, we examine key challenges and emerging opportunities for explanation techniques in the era of LLMs in comparison to conventional machine learning models.


Improving Interpretability via Explicit Word Interaction Graph Layer

Feb 03, 2023
Arshdeep Sekhon, Hanjie Chen, Aman Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi

Recent NLP literature has seen growing interest in improving model interpretability. Along this direction, we propose a trainable neural network layer that learns a global interaction graph between words and then selects more informative words using the learned word interactions. Our layer, which we call WIGRAPH, can plug into any neural network-based NLP text classifier right after its word embedding layer. Across multiple SOTA NLP models and various NLP datasets, we demonstrate that adding the WIGRAPH layer substantially improves NLP models' interpretability and enhances their prediction performance at the same time.
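
As a rough, hedged illustration of where such a layer sits, the PyTorch-style sketch below scores pairwise word interactions with a learned bilinear form and uses them to reweight token embeddings right after the embedding layer. The class name, bilinear scoring, and gating are illustrative assumptions, not the WIGRAPH implementation.

```python
import torch
import torch.nn as nn

class WordInteractionLayer(nn.Module):
    """Minimal sketch in the spirit of WIGRAPH: sits right after the embedding
    layer, scores pairwise word interactions, and reweights token embeddings.
    (Illustrative only; not the authors' implementation.)"""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Bilinear form used to score interactions between word pairs.
        self.interaction = nn.Parameter(torch.empty(embed_dim, embed_dim))
        nn.init.xavier_uniform_(self.interaction)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim) from the classifier's embedding layer
        scores = embeddings @ self.interaction @ embeddings.transpose(1, 2)   # (B, L, L)
        graph = torch.softmax(scores, dim=-1)                  # row-normalized interaction graph
        gate = torch.sigmoid(graph.sum(dim=-1, keepdim=True))  # per-token informativeness
        return gate * embeddings                               # keep informative words, damp the rest

# Usage: insert between the embedding layer and the rest of any text classifier.
emb = torch.randn(2, 10, 128)                 # stand-in for embedding-layer output
layer = WordInteractionLayer(embed_dim=128)
print(layer(emb).shape)                       # torch.Size([2, 10, 128])
```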

* AAAI 2023, 15 pages 

KNIFE: Knowledge Distillation with Free-Text Rationales

Dec 19, 2022
Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren

Free-text rationales (FTRs) follow how humans communicate by explaining reasoning processes via natural language. A number of recent works have studied how to improve language model (LM) generalization by using FTRs to teach LMs the correct reasoning processes behind correct task outputs. These prior works aim to learn from FTRs by appending them to the LM input or target output, but this may introduce an input distribution shift or conflict with the task objective, respectively. We propose KNIFE, which distills FTR knowledge from an FTR-augmented teacher LM (which takes both the task input and the FTR) into a student LM (which takes only the task input) that is then used for inference. Crucially, the teacher LM's forward computation has a bottleneck stage in which all of its FTR states are masked out, which pushes knowledge from the FTR states into the task input/output states. FTR knowledge is then distilled into the student LM by training its task input/output states to align with the teacher LM's. On two question answering datasets, we show that KNIFE significantly outperforms existing FTR learning methods in both fully-supervised and low-resource settings.
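
The sketch below illustrates, under stated assumptions, the kind of distillation objective described above: the teacher's FTR positions are masked out before pooling (the bottleneck), and the student is trained to match the teacher's surviving states and output distribution. The function name, pooling choices, and loss weighting are assumptions rather than the released KNIFE code.

```python
import torch
import torch.nn.functional as F

def knife_style_distillation_loss(teacher_states, student_states,
                                  teacher_logits, student_logits,
                                  ftr_mask):
    """Illustrative sketch of the KNIFE idea (not the released code).
    teacher_states: (B, L_t, D), student_states: (B, L_s, D)
    ftr_mask:       (B, L_t) with 1 on FTR tokens, 0 on task tokens."""
    keep = (1.0 - ftr_mask).unsqueeze(-1)                   # bottleneck: drop FTR states
    teacher_pooled = (teacher_states * keep).sum(1) / keep.sum(1).clamp(min=1e-6)
    student_pooled = student_states.mean(dim=1)

    # Align student states with the teacher's (FTR-masked) states ...
    state_loss = F.mse_loss(student_pooled, teacher_pooled.detach())
    # ... and match the teacher's output distribution.
    logit_loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                          F.softmax(teacher_logits.detach(), dim=-1),
                          reduction="batchmean")
    return state_loss + logit_loss
```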

* 14 pages, 7 figures 

Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification

Dec 10, 2022
Ruixuan Tang, Hanjie Chen, Yangfeng Ji

Some recent works have observed that post-hoc explanations become unstable when input-side perturbations are applied to the model, which raises both interest in and concern about the stability of post-hoc explanations. The remaining question, however, is whether the instability is caused by the neural network model or by the post-hoc explanation method. This work explores the potential source of unstable post-hoc explanations. To separate out the model's influence, we propose a simple output probability perturbation method. Compared to prior input-side perturbation methods, the output probability perturbation method circumvents the neural model's potential effect on the explanations and allows analysis of the explanation method itself. We evaluate the proposed method with three widely used post-hoc explanation methods (LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley (Strumbelj and Kononenko, 2010)). The results demonstrate that the post-hoc methods are stable, barely producing discrepant explanations under output probability perturbations. This observation suggests that neural network models may be the primary source of fragile explanations.
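
A minimal sketch of the perturbation idea, assuming a generic predict-proba interface: noise is added to the model's output probabilities (rather than to the inputs) before they reach the explainer, so any change in the explanation is attributable to the explanation method. The helper names and noise scheme are illustrative, not the paper's exact setup.

```python
import numpy as np

def perturb_probs(probs, eps=0.01, rng=None):
    """Add small Gaussian noise directly to predicted probabilities and renormalize."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.clip(probs + rng.normal(0.0, eps, size=probs.shape), 1e-8, None)
    return noisy / noisy.sum(axis=-1, keepdims=True)

def wrapped_predict(predict_fn, eps=0.01):
    """Wrap a classifier's predict-proba function so post-hoc explainers
    (e.g. LIME, Kernel/Sample Shapley) see perturbed output probabilities."""
    def f(texts):
        return perturb_probs(predict_fn(texts), eps=eps)
    return f

# Stability check: run the explainer on predict_fn and on wrapped_predict(predict_fn),
# then compare the two attribution vectors (e.g. by rank correlation).
```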

* EMNLP BlackboxNLP 2022 

REV: Information-Theoretic Evaluation of Free-Text Rationales

Oct 10, 2022
Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta

Free-text rationales are a promising step towards explainable AI, yet their evaluation remains an open research problem. While existing metrics have mostly focused on measuring the direct association between the rationale and a given label, we argue that an ideal metric should also capture the new information uniquely provided in the rationale that is otherwise not provided in the input or the label. We investigate this research problem from an information-theoretic perspective using conditional V-information. More concretely, we propose REV (Rationale Evaluation with conditional V-information), a metric that quantifies the new information in a rationale supporting a given label beyond the information already available in the input or the label. Experiments on reasoning tasks across four benchmarks, including few-shot prompting with GPT-3, demonstrate the effectiveness of REV in evaluating different types of rationale-label pairs compared to existing metrics. Through several quantitative comparisons, we demonstrate the capability of REV to provide more sensitive measurements of new information in free-text rationales with respect to a label. Furthermore, REV is consistent with human judgments on rationale evaluations. Overall, when used alongside traditional performance metrics, REV provides deeper insights into a model's reasoning and prediction processes.
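
The toy sketch below conveys the conditional V-information intuition behind an REV-style score, under the assumption that evaluation models supply the label log-probability given the rationale and given a vacuous baseline rationale; the paper's exact construction differs in detail.

```python
import math

def rev_style_score(log_p_label_given_input_and_rationale: float,
                    log_p_label_given_input_and_baseline: float) -> float:
    """Hedged sketch: the new information a rationale carries about the label,
    beyond what a vacuous baseline rationale already provides. Both quantities
    are assumed to come from evaluation models fine-tuned for this purpose."""
    return log_p_label_given_input_and_rationale - log_p_label_given_input_and_baseline

# Example: the evaluator assigns the gold label probability 0.9 given the rationale
# but only 0.6 given the vacuous baseline, so the rationale adds information:
print(rev_style_score(math.log(0.9), math.log(0.6)))  # ~0.405 > 0
```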


Self-augmented Data Selection for Few-shot Dialogue Generation

May 19, 2022
Wanyu Du, Hanjie Chen, Yangfeng Ji

The natural language generation (NLG) module in task-oriented dialogue systems translates structured meaning representations (MRs) into text responses and, as the human-machine interaction interface, has a great impact on users' experience. However, in practice, developers often have only a small amount of well-annotated data and face high data collection costs when building the NLG module. In this work, we adopt the self-training framework to deal with the few-shot MR-to-Text generation problem. We leverage a pre-trained language model to self-augment a large amount of pseudo-labeled data. To prevent a gradual drift from the target data distribution to the noisy augmented data distribution, we propose a novel data selection strategy that selects the data our generation model is most uncertain about. Compared with existing data selection methods, our method is (1) parameter-efficient, requiring no additional neural models to be trained, and (2) computation-efficient, requiring only several stochastic forward passes of the model to estimate the uncertainty. We conduct empirical experiments on two benchmark datasets, FewShotWOZ and FewShotSGD, and show that our proposed framework consistently outperforms other baselines in terms of BLEU and ERR.
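
As a hedged sketch of the uncertainty-based selection criterion, the snippet below keeps dropout active and uses the variance of the output distribution across several stochastic forward passes as a per-example uncertainty score; the function and argument names are assumptions, not the authors' code.

```python
import torch

@torch.no_grad()
def uncertainty_by_dropout(model, batch_inputs, num_passes: int = 10):
    """Score pseudo-labeled candidates by disagreement across stochastic forward
    passes (dropout left on). No extra model is trained; only forward passes."""
    model.train()                          # keep dropout layers stochastic at inference
    probs = []
    for _ in range(num_passes):
        logits = model(**batch_inputs)     # assumed: the model returns raw logits
        probs.append(torch.softmax(logits, dim=-1))
    probs = torch.stack(probs)             # (num_passes, batch, ..., num_classes)
    var = probs.var(dim=0)                 # disagreement across the passes
    return var.reshape(var.size(0), -1).mean(dim=1)   # one uncertainty score per example

# Candidates with the highest scores (most uncertain) are added to training in the next round.
```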


Pathologies of Pre-trained Language Models in Few-shot Fine-tuning

Apr 17, 2022
Hanjie Chen, Guoqing Zheng, Ahmed Hassan Awadallah, Yangfeng Ji

Although adapting pre-trained language models with few examples has shown promising performance on text classification, there is a lack of understanding of where the performance gain comes from. In this work, we propose to answer this question by interpreting the adaptation behavior using post-hoc explanations of model predictions. By modeling feature statistics of explanations, we discover that (1) without fine-tuning, pre-trained models (e.g., BERT and RoBERTa) show strong prediction bias across labels; and (2) although few-shot fine-tuning can mitigate the prediction bias and demonstrate promising prediction performance, our analysis shows that models gain this improvement by capturing non-task-related features (e.g., stop words) or shallow data patterns (e.g., lexical overlap). These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior, which requires further sanity checks on model predictions and careful design of model evaluations in few-shot fine-tuning.
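
A simple diagnostic in the spirit of this analysis (illustrative only, not the paper's feature-statistics pipeline) is to measure how unevenly a zero-shot or few-shot model spreads its predictions across labels:

```python
from collections import Counter

def label_prediction_bias(predicted_labels, num_labels):
    """Return the per-label share of predictions and the share of the most-predicted
    label. A heavily skewed distribution signals prediction bias rather than genuine
    task understanding."""
    counts = Counter(predicted_labels)
    total = max(len(predicted_labels), 1)
    fracs = [counts.get(label, 0) / total for label in range(num_labels)]
    return fracs, max(fracs)

# e.g. a binary model predicting label 0 for 95% of inputs:
print(label_prediction_bias([0] * 95 + [1] * 5, num_labels=2))  # ([0.95, 0.05], 0.95)
```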

* ACL 2022 Workshop on Insights from Negative Results in NLP 

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Mar 23, 2022
Hanjie Chen, Yangfeng Ji

Neural language models are vulnerable to adversarial examples, which are semantically similar to their original counterparts but have a few words replaced by synonyms. A common way to improve model robustness is adversarial training, which follows two steps: collecting adversarial examples by attacking a target model, and fine-tuning the model on a dataset augmented with these adversarial examples. The objective of traditional adversarial training is to make a model produce the same correct predictions on an original/adversarial example pair. However, the consistency of the model's decision-making on the two similar texts is ignored. We argue that a robust model should behave consistently on original/adversarial example pairs, that is, make the same predictions (what) for the same reasons (how), as reflected by consistent interpretations. In this work, we propose a novel feature-level adversarial training method named FLAT. FLAT aims at improving model robustness in terms of both predictions and interpretations. FLAT incorporates variational word masks in neural networks to learn global word importance and act as a bottleneck that teaches the model to make predictions based on important words. FLAT explicitly addresses the vulnerability caused by the mismatch between the model's understanding of replaced words and their synonyms in original/adversarial example pairs by regularizing the corresponding global word importance scores. Experiments show the effectiveness of FLAT in improving the robustness, with respect to both predictions and interpretations, of four neural network models (LSTM, CNN, BERT, and DeBERTa) against two adversarial attacks on four text classification tasks. The models trained via FLAT also show better robustness than baseline models on unforeseen adversarial examples across different attacks.
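
The snippet below sketches, under stated assumptions, the consistency-regularizer idea: a word and the synonym that replaces it in an adversarial example are pushed toward similar global importance scores. It illustrates the idea only and is not the authors' FLAT implementation.

```python
import torch
import torch.nn.functional as F

def importance_consistency_loss(global_importance: torch.Tensor,
                                original_ids: torch.Tensor,
                                substitute_ids: torch.Tensor) -> torch.Tensor:
    """Encourage a word and its adversarial synonym substitute to receive similar
    global importance. `global_importance` is a (vocab_size,) vector of learned
    importance (in FLAT these come from variational word masks); the pairing of
    original/substitute ids is assumed given by the attack."""
    return F.mse_loss(global_importance[original_ids],
                      global_importance[substitute_ids])

# Added to the usual adversarial-training objective, this term pushes the model to
# treat a word and its synonym as equally (un)important.
```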

* AAAI 2022 

Explaining Prediction Uncertainty of Pre-trained Language Models by Detecting Uncertain Words in Inputs

Jan 11, 2022
Hanjie Chen, Yangfeng Ji

Estimating the predictive uncertainty of pre-trained language models is important for increasing their trustworthiness in NLP. Although many previous works focus on quantifying prediction uncertainty, there is little work on explaining it. This paper goes a step further and explains the uncertain predictions of post-calibrated pre-trained language models. We adapt two perturbation-based post-hoc interpretation methods, Leave-one-out and Sampling Shapley, to identify the words in an input that cause the uncertainty of a prediction. We test the proposed methods on BERT and RoBERTa with three tasks, sentiment classification, natural language inference, and paraphrase identification, in both in-domain and out-of-domain settings. Experiments show that both methods consistently capture words in inputs that cause prediction uncertainty.
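
A minimal sketch of the Leave-one-out adaptation, assuming a generic predict-proba wrapper around a calibrated classifier: each word's contribution to uncertainty is the drop in predictive entropy when that word is removed. Names and the exact scoring convention are illustrative assumptions.

```python
import math

def predictive_entropy(probs):
    """Entropy of a probability distribution over labels."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

def leave_one_out_uncertainty(predict_proba, tokens):
    """Attribute prediction *uncertainty* (output entropy) rather than the predicted
    probability. `predict_proba` maps a list of tokens to a label distribution."""
    base = predictive_entropy(predict_proba(tokens))
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]          # drop the i-th word
        scores.append(base - predictive_entropy(predict_proba(reduced)))
    return scores   # large positive score: removing the word lowers the uncertainty

# Words whose removal reduces entropy the most are the ones driving the uncertain prediction.
```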
