Vivek Srikumar

Whispers of Doubt Amidst Echoes of Triumph in NLP Robustness

Nov 16, 2023
Ashim Gupta, Rishanth Rajendhran, Nathan Stringham, Vivek Srikumar, Ana Marasović

Are the longstanding robustness issues in NLP resolved by today's larger and more performant models? To address this question, we conduct a thorough investigation using 19 models of varying sizes that span different architectural choices and pretraining objectives. We evaluate with (a) OOD and challenge test sets, (b) CheckLists, (c) contrast sets, and (d) adversarial inputs. Our analysis reveals that not all OOD tests provide further insight into robustness. Evaluating with CheckLists and contrast sets shows significant gaps in model performance; merely scaling models does not make them sufficiently robust. Finally, we point out that current approaches for adversarial evaluation of models are themselves problematic: they can be easily thwarted and, in their current forms, do not represent a sufficiently deep probe of model robustness. We conclude that not only is the question of robustness in NLP as yet unresolved, but even some of the approaches to measure robustness need to be reassessed.

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

Nov 16, 2023
Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained on only that part of the input outperform chance. Are these correlations picked up by models trained on the full input data? To address this question, we propose a new evaluation method, the Counterfactual Attentiveness Test (CAT). CAT uses counterfactuals by replacing part of the input with its counterpart from a different example (subject to some restrictions), expecting an attentive model to change its prediction. Using CAT, we systematically investigate established supervised and in-context learning models on ten datasets spanning four tasks: natural language inference, reading comprehension, paraphrase detection, and visual & language reasoning. CAT reveals that reliance on such correlations is mainly data-dependent. Surprisingly, we find that GPT-3 becomes less attentive with an increased number of demonstrations, while its accuracy on the test data improves. Our results demonstrate that augmenting training or demonstration data with counterfactuals is effective at improving models' attentiveness. We show that models' attentiveness, as measured by CAT, reveals conclusions that differ from those obtained by solely measuring correlations in the data.
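As a rough illustration of the counterfactual-swap idea, the sketch below computes an attentiveness score for a generic paired-input classifier. The `classify` callable and the random pairing of examples are assumptions for illustration, not the paper's released implementation.

```python
# Minimal sketch of the counterfactual-swap idea behind CAT (not the authors' code).
# `classify` is a hypothetical stand-in for any NLI-style model that maps
# (premise, hypothesis) to a label string.
import random
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (premise, hypothesis)

def attentiveness(
    classify: Callable[[str, str], str],
    examples: List[Example],
    seed: int = 0,
) -> float:
    """Fraction of examples whose prediction changes when the hypothesis is
    replaced by the hypothesis of a different, randomly chosen example."""
    rng = random.Random(seed)
    flips = 0
    for i, (premise, hypothesis) in enumerate(examples):
        original = classify(premise, hypothesis)
        # Borrow a hypothesis from another example (the "counterpart").
        j = rng.choice([k for k in range(len(examples)) if k != i])
        counterfactual = classify(premise, examples[j][1])
        flips += int(counterfactual != original)
    return flips / len(examples)
```

An attentive model should flip its prediction on most such swaps; a model relying on hypothesis-only correlations will not.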

TempTabQA: Temporal Question Answering for Semi-Structured Tables

Nov 14, 2023
Vivek Gupta, Pranshu Kandoi, Mahek Bhavesh Vora, Shuo Zhang, Yujie He, Ridho Reinanda, Vivek Srikumar

Semi-structured data, such as Infobox tables, often include temporal information about entities, either implicitly or explicitly. Can current NLP systems reason about such information in semi-structured tables? To tackle this question, we introduce the task of temporal question answering on semi-structured tables. We present a dataset, TempTabQA, which comprises 11,454 question-answer pairs extracted from 1,208 Wikipedia Infobox tables spanning more than 90 distinct domains. Using this dataset, we evaluate several state-of-the-art models for temporal reasoning. We observe that even the top-performing LLMs lag behind human performance by more than 13.5 F1 points. Given these results, our dataset has the potential to serve as a challenging benchmark to improve the temporal reasoning capabilities of NLP models.

* EMNLP 2023 (Main); 23 figures, 32 tables 

The Integer Linear Programming Inference Cookbook

Jun 30, 2023
Vivek Srikumar, Dan Roth

Over the years, integer linear programs have been employed to model inference in many natural language processing problems. This survey is meant to guide the reader through the process of framing a new inference problem as an instance of an integer linear program and is structured as a collection of recipes. At the end, we will see two worked examples to illustrate the use of these recipes.
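In the spirit of the cookbook's recipes, here is a small, hypothetical example of framing inference as an integer linear program: choosing one BIO tag per token under a "no I directly after O" constraint, written with the PuLP library. The per-token scores are made up, and this encoding is only one of many possible recipes.

```python
# Toy ILP inference: pick one BIO label per token to maximize model scores,
# subject to the constraint that label I cannot directly follow label O.
# Requires PuLP (pip install pulp); the scores below are invented.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

tokens = ["John", "lives", "here"]
labels = ["B", "I", "O"]
scores = {  # hypothetical per-token, per-label scores from some model
    (0, "B"): 2.0, (0, "I"): 0.1, (0, "O"): 0.5,
    (1, "B"): 0.2, (1, "I"): 0.4, (1, "O"): 1.5,
    (2, "B"): 0.3, (2, "I"): 1.2, (2, "O"): 1.0,
}

prob = LpProblem("bio_tagging", LpMaximize)
x = {(i, l): LpVariable(f"x_{i}_{l}", cat=LpBinary)
     for i in range(len(tokens)) for l in labels}

# Objective: total score of the chosen labels.
prob += lpSum(scores[i, l] * x[i, l] for i in range(len(tokens)) for l in labels)

# Recipe: each token gets exactly one label.
for i in range(len(tokens)):
    prob += lpSum(x[i, l] for l in labels) == 1

# Recipe for "if token i is O, then token i+1 is not I": x[i,O] + x[i+1,I] <= 1.
for i in range(len(tokens) - 1):
    prob += x[i, "O"] + x[i + 1, "I"] <= 1

prob.solve()
print([next(l for l in labels if x[i, l].value() == 1) for i in range(len(tokens))])
# -> ['B', 'O', 'O']: the constraint rules out the locally best label I for "here".
```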

Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text

May 25, 2023
Ashim Gupta, Carter Wood Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, Vivek Srikumar

Can language models transform inputs to protect text classifiers against adversarial attacks? In this work, we present ATINTER, a model that intercepts and learns to rewrite adversarial inputs to make them non-adversarial for a downstream text classifier. Our experiments on four datasets and five attack mechanisms reveal that ATINTER is effective at providing better adversarial robustness than existing defense approaches, without compromising task accuracy. For example, on sentiment classification using the SST-2 dataset, our method improves the adversarial accuracy over the best existing defense approach by more than 4% with a smaller decrease in task accuracy (0.5% vs 2.5%). Moreover, we show that ATINTER generalizes across multiple downstream tasks and classifiers without having to explicitly retrain it for those settings. Specifically, we find that when ATINTER is trained to remove adversarial perturbations for the sentiment classification task on the SST-2 dataset, it even transfers to a semantically different task of news classification (on AGNews) and improves the adversarial robustness by more than 10%.

* Accepted to ACL 2023 
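A minimal sketch of the intercept-and-rewrite defense described above, assuming Hugging Face `transformers` pipelines. The off-the-shelf models below are stand-ins, not the released ATINTER rewriter or the classifiers evaluated in the paper.

```python
# Sketch of the intercept-and-rewrite idea (not the released ATINTER model).
# t5-small stands in for a trained rewriter; an SST-2 sentiment model is the
# frozen downstream classifier that never gets retrained.
from transformers import pipeline

rewriter = pipeline("text2text-generation", model="t5-small")
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def robust_classify(text: str) -> str:
    # Step 1: intercept the (possibly adversarial) input and rewrite it.
    rewritten = rewriter("paraphrase: " + text, max_new_tokens=64)[0]["generated_text"]
    # Step 2: the downstream classifier only ever sees the rewritten text.
    return classifier(rewritten)[0]["label"]

print(robust_classify("this m0vie was absolutely w0nderful !"))
```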

Learning Semantic Role Labeling from Compatible Label Sequences

May 24, 2023
Tao Li, Ghazaleh Kazeminejad, Susan W. Brown, Martha Palmer, Vivek Srikumar

This paper addresses the question of how to efficiently learn from disjoint, compatible label sequences. We argue that the compatible structures between disjoint label sets help model learning and inference. We verify this hypothesis on the task of semantic role labeling (SRL), specifically, tagging a sentence with two role sequences: VerbNet arguments and PropBank arguments. Prior work has shown that cross-task interaction improves performance. However, the two tasks are still decoded separately, running the risk of generating structurally inconsistent label sequences (as per lexicons like SEMLINK). To eliminate this issue, we first propose a simple and effective setup that jointly handles VerbNet and PropBank labels as one sequence. With this setup, we show that enforcing SEMLINK constraints during decoding consistently improves the overall F1. With special input constructions, our joint model infers VerbNet arguments from PropBank arguments with over 99% accuracy. We also propose a constrained marginal model that uses SEMLINK information during training to further benefit from the large amounts of PropBank-only data. Our models achieve state-of-the-art F1 scores on VerbNet and PropBank argument labeling on the CoNLL05 dataset, with strong out-of-domain generalization.
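The core of constrained joint decoding can be illustrated with a small sketch: score every (PropBank, VerbNet) label pair and keep only the pairs the lexicon licenses. The `semlink` mapping and the scores below are invented for illustration; the real lexicon is predicate-sense specific, and this is not the paper's decoder.

```python
# Illustrative constrained joint decoding (not the paper's implementation).
# `semlink` is a made-up mapping from PropBank arguments to the VerbNet roles
# they may pair with.
import math

semlink = {"ARG0": {"Agent", "Experiencer"}, "ARG1": {"Theme", "Patient"}}

def best_joint_label(pb_scores: dict, vn_scores: dict) -> tuple:
    """Pick the highest-scoring (PropBank, VerbNet) pair that SEMLINK licenses."""
    best, best_score = None, -math.inf
    for pb, s_pb in pb_scores.items():
        for vn, s_vn in vn_scores.items():
            if vn not in semlink.get(pb, set()):
                continue  # skip structurally inconsistent pairs
            if s_pb + s_vn > best_score:
                best, best_score = (pb, vn), s_pb + s_vn
    return best

# Decoding each task separately would pair ARG1 with Agent, which SEMLINK forbids;
# the joint, constrained decision returns a consistent pair instead.
print(best_joint_label({"ARG0": 0.2, "ARG1": 0.9}, {"Agent": 0.8, "Theme": 0.7}))
# ('ARG1', 'Theme')
```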

An In-depth Investigation of User Response Simulation for Conversational Search

Apr 17, 2023
Zhenduo Wang, Zhichao Xu, Qingyao Ai, Vivek Srikumar

Conversational search has seen increased recent attention in both the IR and NLP communities. It seeks to clarify and solve a user's search need through multi-turn natural language interactions. However, most existing systems are trained and demonstrated with recorded or artificial conversation logs. Eventually, conversational search systems should be trained, evaluated, and deployed in an open-ended setting with unseen conversation trajectories. A key challenge is that training and evaluating such systems both require a human-in-the-loop, which is expensive and does not scale. One strategy is to simulate users, thereby reducing the scaling costs. However, current user simulators are either limited to responding only to yes-no questions from the conversational search system, or unable to produce high-quality responses in general. In this paper, we show that the current state-of-the-art user simulation system can be significantly improved by replacing it with a smaller but advanced natural language generation model. Rather than merely reporting this new state of the art, we present an in-depth investigation of the task of simulating user responses for conversational search. Our goal is to supplement existing work with an insightful hand-analysis of the challenges that remain unsolved by the advanced model, as well as to propose solutions for them. The challenges we identify include (1) dataset noise, (2) a blind spot that is difficult for existing models to learn, and (3) a specific type of misevaluation in the standard empirical setup. Aside from the dataset noise issue, we propose solutions that cover the training blind spot and avoid the misevaluation. These solutions lead to further improvements, and our best system improves significantly over the previous state of the art.

* 9 pages 

Reinforced Clarification Question Generation with Defeasibility Rewards for Disambiguating Social and Moral Situations

Dec 20, 2022
Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, Chandra Bhagavatula

Context is vital for commonsense moral reasoning. "Lying to a friend" is wrong if it is meant to deceive them, but may be morally okay if it is intended to protect them. Such nuanced but salient contextual information can potentially flip the moral judgment of an action. Thus, we present ClarifyDelphi, an interactive system that elicits missing context for a moral situation by generating clarification questions such as "Why did you lie to your friend?". Our approach is inspired by the observation that questions whose potential answers lead to diverging moral judgments are the most informative. We learn to generate questions using reinforcement learning, maximizing the divergence between the moral judgments of hypothetical answers to a question. Human evaluation shows that our system generates more relevant, informative, and defeasible questions compared to other question generation baselines. ClarifyDelphi assists informed moral reasoning by seeking additional morally consequential context to disambiguate social and moral situations.

* 9 pages + bibliography + appendix 
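A simplified sketch of the divergence-based reward idea: a clarification question earns a high reward when two hypothetical answers to it push the moral judgment in opposite directions. The `judge` callable below is a stand-in for a moral-judgment model, and the reward shown is a plain Jensen-Shannon divergence rather than the paper's exact formulation.

```python
# Divergence reward sketch (simplified; not the paper's exact RL setup).
# `judge` stands in for a moral-judgment model that returns a distribution
# over labels such as {good, okay, bad} for a situation plus added context.
import math
from typing import Callable, Dict

def kl(p: Dict[str, float], q: Dict[str, float]) -> float:
    return sum(p[k] * math.log(p[k] / q[k]) for k in p if p[k] > 0)

def js_divergence(p: Dict[str, float], q: Dict[str, float]) -> float:
    m = {k: 0.5 * (p[k] + q[k]) for k in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def question_reward(
    judge: Callable[[str], Dict[str, float]],
    situation: str,
    answer_a: str,
    answer_b: str,
) -> float:
    """Higher reward when the two hypothetical answers flip the judgment."""
    return js_divergence(judge(f"{situation} {answer_a}"),
                         judge(f"{situation} {answer_b}"))
```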

AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization

Dec 08, 2022
Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi

Models trained via empirical risk minimization (ERM) are known to rely on spurious correlations between labels and task-independent input features, resulting in poor generalization under distributional shifts. Group distributionally robust optimization (G-DRO) can alleviate this problem by minimizing the worst-case loss over a set of pre-defined groups of training data. G-DRO successfully improves performance on the worst group, where the correlation does not hold. However, G-DRO assumes that the spurious correlations and associated worst groups are known in advance, making it challenging to apply to new tasks with potentially multiple unknown spurious correlations. We propose AGRO -- Adversarial Group discovery for Distributionally Robust Optimization -- an end-to-end approach that jointly identifies error-prone groups and improves accuracy on them. AGRO equips G-DRO with an adversarial slicing model that finds a group assignment for training examples which maximizes the worst-case loss over the discovered groups. On the WILDS benchmark, AGRO results in 8% higher model performance on average on known worst groups, compared to prior group discovery approaches used with G-DRO. AGRO also improves out-of-distribution performance on SST2, QQP, and MS-COCO -- datasets where potential spurious correlations are as yet uncharacterized. Human evaluation of AGRO groups shows that they contain well-defined, yet previously unstudied, spurious correlations that lead to model errors.
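For context, here is a brief sketch of the G-DRO inner step that AGRO builds on, under the simplifying assumption that group assignments are already given; AGRO's actual contribution, the adversarial slicing model that learns those assignments, is not shown.

```python
# Group-DRO sketch (group assignments assumed given; AGRO's adversarial
# slicing model, which discovers them, is not shown here).
from typing import Tuple
import torch

def group_dro_step(
    per_example_loss: torch.Tensor,   # shape: (batch,)
    group_ids: torch.Tensor,          # shape: (batch,), values in [0, n_groups)
    group_weights: torch.Tensor,      # shape: (n_groups,), sums to 1
    eta: float = 0.01,                # step size of the exponentiated-gradient adversary
) -> Tuple[torch.Tensor, torch.Tensor]:
    n_groups = group_weights.numel()
    group_losses = torch.stack([
        per_example_loss[group_ids == g].mean()
        if (group_ids == g).any() else per_example_loss.new_zeros(())
        for g in range(n_groups)
    ])
    # Adversary: shift probability mass toward the worst-performing groups.
    new_weights = group_weights * torch.exp(eta * group_losses.detach())
    new_weights = new_weights / new_weights.sum()
    # Model: minimize the reweighted (approximately worst-case) loss.
    return (new_weights * group_losses).sum(), new_weights
```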
