Manuel Tonneau

Casteist but Not Racist? Quantifying Disparities in Large Language Model Bias between India and the West

Sep 15, 2023
Khyati Khandelwal, Manuel Tonneau, Andrew M. Bean, Hannah Rose Kirk, Scott A. Hale

Large Language Models (LLMs), now used daily by millions of users, can encode societal biases, exposing their users to representational harms. A large body of scholarship on LLM bias exists, but it predominantly adopts a Western-centric frame and attends comparatively less to bias levels and potential harms in the Global South. In this paper, we quantify stereotypical bias in popular LLMs according to an Indian-centric frame and compare bias levels between the Indian and Western contexts. To do this, we develop a novel dataset which we call Indian-BhED (Indian Bias Evaluation Dataset), containing stereotypical and anti-stereotypical examples in the contexts of caste and religion. We find that the majority of LLMs tested are strongly biased towards stereotypes in the Indian context, especially as compared to the Western context. Finally, we investigate Instruction Prompting as a simple intervention to mitigate such bias and find that it significantly reduces both stereotypical and anti-stereotypical biases in the majority of cases for GPT-3.5. The findings of this work highlight the need to include more diverse voices when evaluating LLMs.
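
At the core of this evaluation is a paired-sentence design: each stereotypical statement in Indian-BhED has an anti-stereotypical counterpart, and a model's preference between the two variants signals bias. Below is a minimal sketch of one common scoring scheme for such pairs, comparing sentence log-likelihoods under an open-weight causal LM; the model choice, the example pair, and the scoring rule are illustrative assumptions rather than the paper's exact protocol (API-only models such as GPT-3.5 do not expose token likelihoods, so prompting-based evaluation is typically used there instead).

```python
# Sketch: score a stereotypical / anti-stereotypical sentence pair by
# total log-likelihood under a small open-weight causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the LM assigns to the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean negative
        # log-likelihood over the predicted tokens (sequence length - 1).
        out = model(**inputs, labels=inputs["input_ids"])
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -out.loss.item() * n_predicted

# Hypothetical pair for illustration only; not taken from Indian-BhED.
stereo = "The nurse said that she would be late."
anti = "The nurse said that he would be late."

# The model shows a stereotypical preference on this pair if it assigns
# higher likelihood to the stereotypical variant.
print(sentence_log_likelihood(stereo) > sentence_log_likelihood(anti))
```

In this framing, the Instruction Prompting intervention would prepend an explicit instruction not to rely on stereotypes to the model's input, rather than change the scoring itself.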

Multilingual Detection of Personal Employment Status on Twitter

Mar 17, 2022
Manuel Tonneau, Dhaval Adjodah, João Palotti, Nir Grinberg, Samuel Fraiberger

Detecting disclosures of individuals' employment status on social media can provide valuable information to match job seekers with suitable vacancies, offer social protection, or measure labor market flows. However, identifying such personal disclosures is a challenging task due to their rarity in a sea of social media content and the variety of linguistic forms used to describe them. Here, we examine three Active Learning (AL) strategies in real-world settings of extreme class imbalance, and identify five types of disclosures about individuals' employment status (e.g., job loss) in three languages using BERT-based classification models. Our findings show that, even under extreme imbalance settings, a small number of AL iterations is sufficient to obtain large and significant gains in precision, recall, and diversity of results compared to a supervised baseline with the same number of labels. We also find that no AL strategy consistently outperforms the rest. Qualitative analysis suggests that AL helps focus the attention mechanism of BERT on core terms and adjust the boundaries of semantic expansion, highlighting the importance of interpretable models to provide greater control and visibility into this dynamic learning process.

* ACL 2022 main conference. Data and models available at https://github.com/manueltonneau/twitter-unemployment 
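
As a rough illustration of the active-learning loop described above, the sketch below implements one iteration of uncertainty sampling, a standard AL strategy (the paper compares three strategies and trains BERT-based classifiers; the stand-in model, placeholder texts, and query size here are assumptions made to keep the example self-contained and fast).

```python
# Sketch: one uncertainty-sampling iteration for a rare-positive-class
# text classification task.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny labeled seed set (1 = personal employment disclosure) and an
# unlabeled pool; both are illustrative placeholders.
labeled_texts = ["i lost my job today", "lovely weather this morning"]
labeled_y = [1, 0]
pool_texts = ["got laid off last week", "watching a movie tonight",
              "we are hiring engineers", "just got fired from my job"]

# A lightweight classifier stands in for the paper's BERT models.
vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_texts)
X_pool = vectorizer.transform(pool_texts)
clf = LogisticRegression().fit(X_labeled, labeled_y)

# Uncertainty sampling: query the pool items whose predicted probability
# is closest to 0.5, i.e. where the current model is least confident.
probs = clf.predict_proba(X_pool)[:, 1]
query_order = np.argsort(np.abs(probs - 0.5))
to_annotate = [pool_texts[i] for i in query_order[:2]]
print(to_annotate)  # label these, add them to the training set, retrain
```

Each iteration sends the queried texts to annotators, folds the new labels into the training set, and retrains; under extreme class imbalance this concentrates labeling effort near the decision boundary, where the rare positive examples are most informative.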