Alexis Ross

Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks

Aug 01, 2023
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim

The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
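As a rough illustration of the evaluation logic described above (not code from the paper), the sketch below compares accuracy on default base-10 addition against a counterfactual base, in the spirit of the paper's arithmetic task. The `reciting_model` stub is an assumption: it always adds in base 10, showing how a memorized, non-transferable procedure scores perfectly under the default condition but degrades on the counterfactual variant.

```python
# Minimal sketch of the default-vs-counterfactual comparison.
# `reciting_model` is a hypothetical stand-in for an LM: it always adds in
# base 10, mimicking a memorized procedure rather than general reasoning.
import random


def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base."""
    digits = []
    while True:
        n, r = divmod(n, base)
        digits.append(str(r))
        if n == 0:
            break
    return "".join(reversed(digits))


def reciting_model(a: str, b: str, base: int) -> str:
    # Ignores the stated base and adds the digit strings as base-10 numbers.
    return str(int(a) + int(b))


def accuracy(model, base: int, n_trials: int = 500) -> float:
    correct = 0
    for _ in range(n_trials):
        x, y = random.randint(0, base ** 2 - 1), random.randint(0, base ** 2 - 1)
        a, b = to_base(x, base), to_base(y, base)
        if model(a, b, base) == to_base(x + y, base):
            correct += 1
    return correct / n_trials


if __name__ == "__main__":
    print("default (base 10):      ", accuracy(reciting_model, base=10))
    print("counterfactual (base 9):", accuracy(reciting_model, base=9))
```

The counterfactual accuracy is nonzero (carry-free sums coincide across bases) but well below the default, which is the kind of gap the framework is designed to expose.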


ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews

Jun 21, 2023
Mike D'Arcy, Alexis Ross, Erin Bransom, Bailey Kuehl, Jonathan Bragg, Tom Hope, Doug Downey


Revising scientific papers based on peer feedback is a challenging task that requires not only deep scientific knowledge and reasoning, but also the ability to recognize the implicit requests in high-level feedback and to choose the best of many possible ways to update the manuscript in response. We introduce this task for large language models and release ARIES, a dataset of review comments and their corresponding paper edits, to enable training and evaluating models. We study two versions of the task: comment-edit alignment and edit generation, and evaluate several baselines, including GPT-4. We find that models struggle even to identify the edits that correspond to a comment, especially in cases where the comment is phrased in an indirect way or where the edit addresses the spirit of a comment but not the precise request. When tasked with generating edits, GPT-4 often succeeds in addressing comments on a surface level, but it rigidly follows the wording of the feedback rather than the underlying intent, and includes fewer technical details than human-written edits. We hope that our formalization, dataset, and analysis will form a foundation for future work in this area.

* 11 pages, 2 figures 
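To make the comment-edit alignment setting concrete, here is a hedged sketch of scoring predicted alignments against gold (comment, edit) pairs with precision, recall, and F1. The pair representation and toy data are assumptions for illustration, not the ARIES release format.

```python
# Illustrative scoring of comment-edit alignment as set retrieval.
# The (comment_id, edit_id) pair representation is an assumption for this
# sketch; see the ARIES release for the actual data format.
from typing import Set, Tuple

Pair = Tuple[str, str]  # (comment_id, edit_id)


def alignment_f1(predicted: Set[Pair], gold: Set[Pair]) -> dict:
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}


if __name__ == "__main__":
    gold = {("c1", "e3"), ("c2", "e1"), ("c3", "e2")}  # hypothetical annotations
    predicted = {("c1", "e3"), ("c2", "e4")}           # hypothetical model output
    print(alignment_f1(predicted, gold))
```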

Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023
Ian R. McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, Andrew Gritsevskiy, Daniel Wurgaft, Derik Kauffman, Gabriel Recchia, Jiacheng Liu, Joe Cavanagh, Max Weiss, Sicong Huang, The Floating Droid, Tom Tseng, Tomasz Korbak, Xudong Shen, Yuhui Zhang, Zhengping Zhou, Najoung Kim, Samuel R. Bowman, Ethan Perez


Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at https://inversescaling.com/data to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.
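The sketch below illustrates, with invented numbers rather than Prize results, how a task's scale-vs-accuracy curve might be labeled as standard, inverse, U-shaped, or inverted-U by checking where the trend reverses.

```python
# Classify a scaling trend from accuracies ordered by increasing model scale.
# The data points below are invented for illustration; the released datasets
# are at https://inversescaling.com/data.


def trend_shape(accuracies, tol=1e-9):
    """Label a sequence of accuracies ordered by increasing scale."""
    diffs = [b - a for a, b in zip(accuracies, accuracies[1:])]
    rising = all(d >= -tol for d in diffs)
    falling = all(d <= tol for d in diffs)
    if rising and not falling:
        return "standard scaling"
    if falling and not rising:
        return "inverse scaling"
    # A single reversal: falls then rises (U-shaped) or rises then falls (inverted-U).
    turn = min(range(len(accuracies)), key=lambda i: accuracies[i])
    if 0 < turn < len(accuracies) - 1:
        return "U-shaped scaling"
    turn = max(range(len(accuracies)), key=lambda i: accuracies[i])
    if 0 < turn < len(accuracies) - 1:
        return "inverted-U scaling"
    return "mixed / unclear"


if __name__ == "__main__":
    print(trend_shape([0.70, 0.62, 0.55, 0.48]))  # monotone decline -> inverse
    print(trend_shape([0.70, 0.55, 0.52, 0.66]))  # dips then recovers -> U-shaped
```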


CREST: A Joint Framework for Rationalization and Counterfactual Text Generation

May 26, 2023
Marcos Treviso, Alexis Ross, Nuno M. Guerreiro, André F. T. Martins


Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model's predictions.

* Accepted at ACL 2023 (main) 
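One use the abstract highlights is counterfactual data augmentation. The sketch below shows that pattern under stated assumptions: `generate_counterfactual` is a placeholder for a CREST-style editor, the task is assumed binary, and the toy editor only tags text rather than rewriting it.

```python
# Schematic of counterfactual data augmentation: pair each training example
# with a generated counterfactual carrying the flipped label.
# `generate_counterfactual` stands in for a CREST-style editor; its
# implementation here is a placeholder, not the paper's model.
from typing import Callable, List, Tuple

Example = Tuple[str, int]  # (text, label)


def augment_with_counterfactuals(
    data: List[Example],
    generate_counterfactual: Callable[[str, int], str],
) -> List[Example]:
    augmented = list(data)
    for text, label in data:
        flipped = 1 - label  # binary task assumed for this sketch
        augmented.append((generate_counterfactual(text, flipped), flipped))
    return augmented


if __name__ == "__main__":
    toy_editor = lambda text, target: f"{text} [edited toward label {target}]"
    train = [("the movie was wonderful", 1), ("a dull, lifeless plot", 0)]
    for ex in augment_with_counterfactuals(train, toy_editor):
        print(ex)
```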

Does Self-Rationalization Improve Robustness to Spurious Correlations?

Oct 24, 2022
Alexis Ross, Matthew E. Peters, Ana Marasović


Rationalization is fundamental to human reasoning and learning. NLP models trained to produce rationales along with predictions, called self-rationalization models, have been investigated for their interpretability and utility to end-users. However, the extent to which training with human-written rationales facilitates learning remains an under-explored question. We ask whether training models to self-rationalize can aid in their learning to solve tasks for the right reasons. Specifically, we evaluate how training self-rationalization models with free-text rationales affects robustness to spurious correlations in fine-tuned encoder-decoder and decoder-only models of six different sizes. We evaluate robustness to spurious correlations by measuring performance on 1) manually annotated challenge datasets and 2) subsets of original test sets where reliance on spurious correlations would fail to produce correct answers. We find that while self-rationalization can improve robustness to spurious correlations in low-resource settings, it tends to hurt robustness in higher-resource settings. Furthermore, these effects depend on model family and size, as well as on rationale content. Together, our results suggest that explainability can come at the cost of robustness; thus, appropriate care should be taken when training self-rationalizing models with the goal of creating more trustworthy models.
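For readers unfamiliar with the setup, here is a minimal sketch of how a self-rationalizing training target differs from a label-only target for a text-to-text model. The template strings and the NLI example are assumptions for illustration, not the exact formats used in the paper.

```python
# Contrast between label-only and self-rationalizing training targets for a
# text-to-text model. The templates below are illustrative assumptions.


def label_only_target(label: str) -> str:
    return label


def self_rationalization_target(label: str, rationale: str) -> str:
    # The model is trained to emit the prediction plus a free-text rationale.
    return f"{label} explanation: {rationale}"


if __name__ == "__main__":
    source = "premise: A man is playing a guitar. hypothesis: A person makes music."
    print("input:            ", source)
    print("baseline target:  ", label_only_target("entailment"))
    print("self-rat. target: ", self_rationalization_target(
        "entailment", "playing a guitar is a way of making music"))
```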


Tailor: Generating and Perturbing Text with Semantic Controls

Jul 15, 2021
Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner


Making controlled perturbations is essential for various tasks (e.g., data augmentation), but building task-specific generators can be expensive. We introduce Tailor, a task-agnostic generation system that perturbs text in a semantically-controlled way. With unlikelihood training, we design Tailor's generator to follow a series of control codes derived from semantic roles. Through modifications of these control codes, Tailor can produce fine-grained perturbations. We implement a set of operations on control codes that can be composed into complex perturbation strategies, and demonstrate their effectiveness in three distinct applications: First, Tailor facilitates the construction of high-quality contrast sets that are lexically diverse, and less biased than original task test data. Second, paired with automated labeling heuristics, Tailor helps improve model generalization through data augmentation: We obtain an average gain of 1.73 on an NLI challenge set by perturbing just 5% of training data. Third, without any finetuning overhead, Tailor's perturbations effectively improve compositionality in fine-grained style transfer, outperforming fine-tuned baselines on 6 transfers.
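The following is only a schematic of the compositional idea: represent a sentence's semantic roles as a specification, compose perturbation operations over it, and hand the result to a trained generator. The role names, rendering, and example sentence are simplified assumptions; Tailor's actual control-code format differs.

```python
# Schematic of perturbing text via semantic-role control codes.
# The code names below are simplified assumptions used to show composition.
from typing import Callable, Dict

Spec = Dict[str, str]  # e.g. {"VERB": ..., "AGENT": ..., "PATIENT": ...}


def swap_agent_patient(spec: Spec) -> Spec:
    out = dict(spec)
    out["AGENT"], out["PATIENT"] = spec["PATIENT"], spec["AGENT"]
    return out


def passivize(spec: Spec) -> Spec:
    out = dict(spec)
    out["VOICE"] = "passive"
    return out


def compose(*ops: Callable[[Spec], Spec]) -> Callable[[Spec], Spec]:
    def combined(spec: Spec) -> Spec:
        for op in ops:
            spec = op(spec)
        return spec
    return combined


if __name__ == "__main__":
    spec = {"VERB": "comfort", "AGENT": "the nurse",
            "PATIENT": "the child", "VOICE": "active"}
    perturbed = compose(swap_agent_patient, passivize)(spec)
    # In Tailor, a spec like this would be rendered as control codes and
    # passed to the trained generator to produce the perturbed sentence.
    print(perturbed)
```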


Competency Problems: On Finding and Removing Artifacts in Language Data

Apr 17, 2021
Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah Smith


Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have "spurious" instead of legitimate correlations is typically left unspecified. In this work we argue that for complex language understanding tasks, all simple feature correlations are spurious, and we formalize this notion into a class of problems which we call competency problems. For example, the word "amazing" on its own should not give information about a sentiment label independent of the context in which it appears, which could include negation, metaphor, sarcasm, etc. We theoretically analyze the difficulty of creating data for competency problems when human bias is taken into account, showing that realistic datasets will increasingly deviate from competency problems as dataset size increases. This analysis gives us a simple statistical test for dataset artifacts, which we use to show more subtle biases than were described in prior work, including demonstrating that models are inappropriately affected by these less extreme biases. Our theoretical treatment of this problem also allows us to analyze proposed solutions, such as making local edits to dataset instances, and to give recommendations for future data collection and model design efforts that target competency problems.
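A hedged sketch of the kind of single-feature test the abstract describes: for each word, test whether the label distribution conditioned on the word's presence deviates from chance. The plain two-sided binomial test, the 0.5 null for a balanced binary task, and the frequency cutoff are simplifications of the paper's analysis, which also accounts for dataset size and multiple comparisons.

```python
# Sketch of a single-feature artifact test for a balanced binary task:
# for each token, test whether p(label=1 | token appears) deviates from 0.5.
from collections import defaultdict

from scipy.stats import binomtest


def artifact_candidates(examples, alpha=0.01, min_count=20):
    counts = defaultdict(lambda: [0, 0])  # token -> [n_with_token, n_label_1]
    for text, label in examples:
        for token in set(text.lower().split()):
            counts[token][0] += 1
            counts[token][1] += label
    flagged = []
    for token, (n, k) in counts.items():
        if n < min_count:  # skip rare tokens; threshold is arbitrary here
            continue
        pvalue = binomtest(k, n, 0.5).pvalue
        if pvalue < alpha:
            flagged.append((token, k / n, pvalue))
    return sorted(flagged, key=lambda t: t[2])


if __name__ == "__main__":
    # Synthetic data where "amazing" is perfectly predictive of label 1.
    pos = [("an amazing and moving film", 1)] * 30
    neg = [("a long and moving film", 0)] * 30
    for token, rate, pval in artifact_candidates(pos + neg)[:3]:
        print(f"{token}: p(label=1|token)={rate:.2f}, p-value={pval:.2g}")
```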


Explaining NLP Models via Minimal Contrastive Editing (MiCE)

Dec 27, 2020
Alexis Ross, Ana Marasović, Matthew E. Peters


Humans give contrastive explanations that explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the important role that contrastivity plays in how people generate and evaluate explanations, this property is largely missing from current methods for explaining NLP models. We present Minimal Contrastive Editing (MiCE), a method for generating contrastive explanations of model predictions in the form of edits to inputs that change model outputs to the contrast case. Our experiments across three tasks -- binary sentiment classification, topic classification, and multiple-choice question answering -- show that MiCE is able to produce edits that are not only contrastive, but also minimal and fluent, consistent with human contrastive edits. We demonstrate how MiCE edits can be used for two use cases in NLP system development -- uncovering dataset artifacts and debugging incorrect model predictions -- and thereby illustrate that generating contrastive explanations is a promising research direction for model interpretability.
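To ground the idea, here is a sketch of the selection criterion only: among candidate rewrites, keep those the classifier maps to the contrast label and prefer the one that changes the fewest tokens. Candidate generation, which in MiCE is a trained editor that infills masked spans, is stubbed out with a fixed list, and the toy classifier is an assumption.

```python
# Schematic of selecting a minimal contrastive edit.
from typing import Callable, List, Optional


def token_edit_distance(a: str, b: str) -> int:
    """Word-level Levenshtein distance, used as a crude minimality measure."""
    x, y = a.split(), b.split()
    prev = list(range(len(y) + 1))
    for i, tok_x in enumerate(x, 1):
        cur = [i]
        for j, tok_y in enumerate(y, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (tok_x != tok_y)))
        prev = cur
    return prev[-1]


def minimal_contrastive_edit(
    original: str,
    contrast_label: int,
    candidates: List[str],
    predict: Callable[[str], int],
) -> Optional[str]:
    flipping = [c for c in candidates if predict(c) == contrast_label]
    if not flipping:
        return None
    return min(flipping, key=lambda c: token_edit_distance(original, c))


if __name__ == "__main__":
    # Toy sentiment "model": positive iff it sees a positive word.
    predict = lambda s: int(any(w in s for w in ("great", "wonderful")))
    original = "the acting was flat and the plot dull"
    candidates = [
        "the acting was great and the plot dull",
        "the acting was wonderful and the plot was a joy",
    ]
    print(minimal_contrastive_edit(original, contrast_label=1,
                                   candidates=candidates, predict=predict))
```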


Ensuring Actionable Recourse via Adversarial Training

Nov 12, 2020
Alexis Ross, Himabindu Lakkaraju, Osbert Bastani


As machine learning models are increasingly deployed in high-stakes domains such as legal and financial decision-making, there has been growing interest in post-hoc methods for generating counterfactual explanations. Such explanations provide individuals adversely impacted by predicted outcomes (e.g., an applicant denied a loan) with "recourse", i.e., a description of how they can change their features to obtain a positive outcome. We propose a novel algorithm that leverages adversarial training and PAC confidence sets to learn models that theoretically guarantee recourse to affected individuals with high probability without sacrificing accuracy. To the best of our knowledge, our approach is the first to learn models for which recourses are guaranteed with high probability. Extensive experimentation with real-world datasets spanning various applications, including recidivism prediction, bail outcomes, and lending, demonstrates the efficacy of the proposed framework.
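Below is a hedged sketch of just the inner recourse check for a linear classifier: does some bounded change over the actionable features flip the decision? In the paper, this kind of feasibility test drives an adversarial training loop; the linear form, the L-infinity budget, and the feature names are assumptions chosen so the best action has a closed form.

```python
# Recourse feasibility check for a linear classifier under an L-inf budget
# restricted to actionable coordinates. Illustration only, not the paper's
# training procedure.
import numpy as np


def has_recourse(w, b, x, actionable, budget):
    """True if moving x by at most `budget` along actionable coordinates
    can push the linear score w.x + b above zero."""
    w = np.asarray(w, float)
    x = np.asarray(x, float)
    actionable = np.asarray(actionable, bool)
    # Best achievable score: move each actionable feature by +/- budget in
    # the direction of its weight.
    best = w @ x + b + budget * np.sum(np.abs(w[actionable]))
    return best > 0


if __name__ == "__main__":
    w, b = np.array([2.0, -1.0, 0.5]), -3.0
    x = np.array([0.5, 1.0, 2.0])        # currently denied: score = -2.0
    actionable = [True, False, True]     # e.g., income and savings, but not age
    print(has_recourse(w, b, x, actionable, budget=1.0))   # True: 2.5 > 2.0
    print(has_recourse(w, b, x, actionable, budget=0.5))   # False: 1.25 < 2.0
```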
