
Indira Sen

Tell Me What You Know About Sexism: Expert-LLM Interaction Strategies and Co-Created Definitions for Zero-Shot Sexism Detection

Apr 21, 2025

Only a Little to the Left: A Theory-grounded Measure of Political Bias in Large Language Models

Mar 20, 2025

Sensitive Content Classification in Social Media: A Holistic Resource and Evaluation

Nov 29, 2024

Robustness and Confounders in the Demographic Alignment of LLMs with Human Perceptions of Offensiveness

Nov 13, 2024

From Measurement Instruments to Data: Leveraging Theory-Driven Synthetic Training Data for Classifying Social Constructs

Oct 17, 2024

An Open Multilingual System for Scoring Readability of Wikipedia

Jun 03, 2024

The Unseen Targets of Hate -- A Systematic Review of Hateful Communication Datasets

May 14, 2024

People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection

Nov 02, 2023

Counterfactually Augmented Data and Unintended Bias: The Case of Sexism and Hate Speech Detection

May 09, 2022

"Unsex me here": Revisiting Sexism Detection Using Psychological Scales and Adversarial Samples

Apr 27, 2020