Eve Fleisig

Incorporating Worker Perspectives into MTurk Annotation Practices for NLP

Nov 16, 2023
Olivia Huang, Eve Fleisig, Dan Klein

Current practices regarding data collection for natural language processing on Amazon Mechanical Turk (MTurk) often rely on a combination of studies on data quality and heuristics shared among NLP researchers. However, without considering the perspectives of MTurk workers, these approaches are susceptible to issues regarding workers' rights and poor response quality. We conducted a critical literature review and a survey of MTurk workers aimed at addressing open questions regarding best practices for fair payment, worker privacy, data quality, and worker incentives. We found that worker preferences are often at odds with received wisdom among NLP researchers. Surveyed workers preferred reliable, reasonable payments over uncertain, very high payments; reported frequently lying on demographic questions; and expressed frustration at having work rejected with no explanation. We also found that workers view some quality control methods, such as requiring minimum response times or Masters qualifications, as biased and largely ineffective. Based on the survey results, we provide recommendations on how future NLP studies may better account for MTurk workers' experiences in order to respect workers' rights and improve data quality.

First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models

Nov 08, 2023
Naomi Saphra, Eve Fleisig, Kyunghyun Cho, Adam Lopez

Many NLP researchers are experiencing an existential crisis triggered by the astonishing success of ChatGPT and other systems based on large language models (LLMs). After such a disruptive change to our understanding of the field, what is left to do? Taking a historical lens, we look for guidance from the first era of LLMs, which began in 2005 with large $n$-gram models for machine translation. We identify durable lessons from the first era, and more importantly, we identify evergreen problems where NLP researchers can continue to make meaningful contributions in areas where LLMs are ascendant. Among these lessons, we discuss the primacy of hardware advancement in shaping the availability and importance of scale, as well as the urgent challenge of quality evaluation, both automated and human. We argue that disparities in scale are transient and that researchers can work to reduce them; that data, rather than hardware, is still a bottleneck for many meaningful applications; that meaningful evaluation informed by actual use is still an open problem; and that there is still room for speculative approaches.

Ghostbuster: Detecting Text Ghostwritten by Large Language Models

May 24, 2023
Vivek Verma, Eve Fleisig, Nicholas Tomlin, Dan Klein

We introduce Ghostbuster, a state-of-the-art system for detecting AI-generated text. Our method works by passing documents through a series of weaker language models and running a structured search over possible combinations of their features, then training a classifier on the selected features to determine if the target document was AI-generated. Crucially, Ghostbuster does not require access to token probabilities from the target model, making it useful for detecting text generated by black-box models or unknown model versions. In conjunction with our model, we release three new datasets of human and AI-generated text as detection benchmarks that cover multiple domains (student essays, creative fiction, and news) and task setups: document-level detection, author identification, and a challenge task of paragraph-level detection. Ghostbuster averages 99.1 F1 across all three datasets on document-level detection, outperforming previous approaches such as GPTZero and DetectGPT by up to 32.7 F1.
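
As a rough illustration of this pipeline, the sketch below scores documents under a single weak language model (a unigram model standing in for the series of weaker models), collapses the per-token scores into a few hand-picked document-level features in place of the structured feature search, and fits a logistic-regression detector. The names and feature choices are illustrative assumptions, not the released Ghostbuster system.

```python
# Simplified, hypothetical sketch of a Ghostbuster-style detector: score
# documents with a weak language model, combine per-token scores into
# document-level features, and train a classifier on those features.
from collections import Counter
import math

import numpy as np
from sklearn.linear_model import LogisticRegression


def unigram_logprobs(doc, counts, total):
    """Per-token log-probabilities under a unigram 'weak' language model."""
    vocab = len(counts)
    return [math.log((counts[tok] + 1) / (total + vocab)) for tok in doc.split()]


def doc_features(logprobs):
    """Collapse per-token scores into document features (a stand-in for
    Ghostbuster's structured search over feature combinations)."""
    arr = np.array(logprobs)
    return [arr.mean(), arr.min(), arr.max(), arr.var()]


def train_detector(docs, labels):
    """labels: 1 = AI-generated, 0 = human-written."""
    counts = Counter(tok for d in docs for tok in d.split())
    total = sum(counts.values())
    X = [doc_features(unigram_logprobs(d, counts, total)) for d in docs]
    return LogisticRegression(max_iter=1000).fit(X, labels)
```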

When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks

May 24, 2023
Eve Fleisig, Rediet Abebe, Dan Klein

Though majority vote among annotators is typically used for ground truth labels in natural language processing, annotator disagreement in tasks such as hate speech detection may reflect differences in opinion across groups, not noise. Thus, a crucial problem in hate speech detection is determining whether a statement is offensive to the demographic group that it targets, when that group may constitute a small fraction of the annotator pool. We construct a model that predicts individual annotator ratings on potentially offensive text and combines this information with the predicted target group of the text to model the opinions of target group members. We show gains across a range of metrics, including raising performance over the baseline by 22% at predicting individual annotators' ratings and by 33% at predicting variance among annotators, which provides a metric for model uncertainty downstream. We find that annotator ratings can be predicted using their demographic information and opinions on online content, without the need to track identifying annotator IDs that link each annotator to their ratings. We also find that use of non-invasive survey questions on annotators' online experiences helps to maximize privacy and minimize unnecessary collection of demographic information when predicting annotators' opinions.
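
A minimal sketch of the general idea, under assumed inputs: each training row pairs text features with one annotator's demographic/survey features and that annotator's rating; at prediction time, ratings are aggregated over annotators matching the text's predicted target group, with the variance of those predictions serving as a rough uncertainty signal. The regressor and feature layout are placeholders, not the paper's architecture.

```python
# Hypothetical sketch: predict each annotator's rating from text features plus
# that annotator's survey/demographic features, then aggregate predictions over
# annotators in the text's predicted target group.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


def fit_annotator_model(text_feats, annotator_feats, ratings):
    """One row per (text, annotator) pair; ratings are that annotator's labels."""
    X = np.hstack([text_feats, annotator_feats])
    return GradientBoostingRegressor().fit(X, ratings)


def target_group_opinion(model, text_feat, annotator_pool, in_target_group):
    """Mean and variance of predicted ratings among target-group annotators;
    the variance doubles as a rough downstream uncertainty signal."""
    members = annotator_pool[in_target_group]
    X = np.hstack([np.tile(text_feat, (len(members), 1)), members])
    preds = model.predict(X)
    return preds.mean(), preds.var()
```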

Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection

May 24, 2023
Vyoma Raman, Eve Fleisig, Dan Klein

A standard method for measuring the impacts of AI on marginalized communities is to determine performance discrepancies between specified demographic groups. These approaches aim to address harms toward vulnerable groups, but they obscure harm patterns faced by intersectional subgroups or shared across demographic groups. We instead operationalize "the margins" as data points that are statistical outliers due to having demographic attributes distant from the "norm" and measure harms toward these outliers. We propose a Group-Based Performance Disparity Index (GPDI) that measures the extent to which a subdivision of a dataset into subgroups identifies those facing increased harms. We apply our approach to detecting disparities in toxicity detection and find that text targeting outliers is 28% to 86% more toxic for all types of toxicity examined. We also discover that model performance is consistently worse for demographic outliers, with disparities in error between outliers and non-outliers ranging from 28% to 71% across toxicity types. Our outlier-based analysis has comparable or higher GPDI than traditional subgroup-based analyses, suggesting that outlier analysis enhances identification of subgroups facing greater harms. Finally, we find that minoritized racial and religious groups are most associated with outliers, which suggests that outlier analysis is particularly beneficial for identifying harms against those groups.
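
The outlier-based framing can be sketched as follows, with an illustrative distance-from-centroid outlier score and a simple error-gap comparison; the paper's GPDI itself is not reproduced here.

```python
# Illustrative sketch of the outlier-based framing: flag rows whose demographic
# attribute vectors are far from the dataset "norm", then compare model error
# on outliers vs. non-outliers. The distance score and threshold are
# assumptions; the paper's GPDI is not reproduced here.
import numpy as np


def demographic_outliers(demog, quantile=0.9):
    """Mark rows whose demographic vectors lie far from the centroid."""
    dist = np.linalg.norm(demog - demog.mean(axis=0), axis=1)
    return dist > np.quantile(dist, quantile)


def error_disparity(errors, is_outlier):
    """Relative gap in mean model error between outliers and non-outliers."""
    out, rest = errors[is_outlier].mean(), errors[~is_outlier].mean()
    return (out - rest) / rest
```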

When the Majority is Wrong: Leveraging Annotator Disagreement for Subjective Tasks

May 11, 2023
Eve Fleisig, Rediet Abebe, Dan Klein

Though majority vote among annotators is typically used for ground truth labels in natural language processing, annotator disagreement in tasks such as hate speech detection may reflect differences among group opinions, not noise. Thus, a crucial problem in hate speech detection is determining whether a statement is offensive to the demographic group that it targets, which may constitute a small fraction of the annotator pool. We construct a model that predicts individual annotator ratings on potentially offensive text and combines this information with the predicted target group of the text to model the opinions of target group members. We show gains across a range of metrics, including raising performance over the baseline by 22% at predicting individual annotators' ratings and by 33% at predicting variance among annotators, which provides a method of measuring model uncertainty downstream. We find that annotators' ratings can be predicted using their demographic information and opinions on online content, without the need to track identifying annotator IDs that link each annotator to their ratings. We also find that use of non-invasive survey questions on annotators' online experiences helps to maximize privacy and minimize unnecessary collection of demographic information when predicting annotators' opinions.

Mitigating Gender Bias in Machine Translation through Adversarial Learning

Mar 20, 2022
Eve Fleisig, Christiane Fellbaum

Machine translation and other NLP systems often contain significant biases regarding sensitive attributes, such as gender or race, that worsen system performance and perpetuate harmful stereotypes. Recent preliminary research suggests that adversarial learning can be used as part of a model-agnostic bias mitigation method that requires no data modifications. However, adapting this strategy for machine translation and other modern NLP domains requires (1) restructuring training objectives in the context of fine-tuning pretrained large language models and (2) developing measures for gender or other protected variables for tasks in which these attributes must be deduced from the data itself. We present an adversarial learning framework that addresses these challenges to mitigate gender bias in seq2seq machine translation. Our framework reduces the disparity in translation quality between sentences with male and female entities by 86% for English-German translation and 91% for English-French translation, with minimal effect on translation quality. The results suggest that adversarial learning is a promising technique for mitigating gender bias in machine translation.
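
A minimal sketch of the adversarial setup, assuming a seq2seq translator that exposes its encoder states and translation loss: an auxiliary classifier tries to recover the gender of entities from pooled encoder states, and a gradient reversal layer turns that signal into a bias-mitigation pressure on the translator. The module names, pooling step, and loss weighting are assumptions, not the paper's exact training objective.

```python
# Hypothetical sketch of the adversarial setup in PyTorch. `translator` and
# `adversary` are assumed modules: the translator returns encoder states and
# its translation loss; the adversary predicts entity gender from pooled
# encoder states through a gradient reversal layer.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def adversarial_step(translator, adversary, batch, gender_labels, lam=1.0):
    """One combined update: translation loss plus reversed gender-prediction loss."""
    enc_states, translation_loss = translator(batch)   # assumed interface
    pooled = enc_states.mean(dim=1)                     # pool over sequence length
    gender_logits = adversary(GradReverse.apply(pooled, lam))
    adv_loss = nn.functional.cross_entropy(gender_logits, gender_labels)
    return translation_loss + adv_loss                  # backprop discourages gender leakage
```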

Sentiment Analysis for Reinforcement Learning

Oct 05, 2020
Ameet Deshpande, Eve Fleisig

While reinforcement learning (RL) has been successful in natural language processing (NLP) domains such as dialogue generation and text-based games, it typically faces the problem of sparse rewards that leads to slow or no convergence. Traditional methods that use text descriptions to extract only a state representation ignore the feedback inherently present in them. In text-based games, for example, descriptions like "Good Job! You ate the food" indicate progress, and descriptions like "You entered a new room" indicate exploration. Positive and negative cues like these can be converted to rewards through sentiment analysis. This technique converts the sparse reward problem into a dense one, which is easier to solve. Furthermore, this can enable reinforcement learning without rewards, in which the agent learns entirely from these intrinsic sentiment rewards. This framework is similar to intrinsic motivation, where the environment does not necessarily provide the rewards, but the agent analyzes and realizes them by itself. We find that providing dense rewards in text-based games using sentiment analysis improves performance under some conditions.

* Work in progress 
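
A minimal sketch of sentiment-based reward shaping, using an off-the-shelf sentiment analyzer (NLTK's VADER here, as an assumption) to score the game's textual feedback and add it to the sparse environment reward; the environment interface and scaling factor are illustrative, not the paper's exact setup.

```python
# Illustrative sketch of sentiment-shaped rewards for a text-based game: the
# dense reward adds a scaled sentiment score for each textual description to
# the sparse environment reward.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()


def shaped_reward(description, env_reward, scale=0.1):
    """Dense reward = sparse game reward + scaled sentiment of the feedback text."""
    sentiment = analyzer.polarity_scores(description)["compound"]  # in [-1, 1]
    return env_reward + scale * sentiment


# "Good Job! You ate the food" yields a positive shaping bonus, while a neutral
# exploration message like "You entered a new room" contributes little.
```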