
Alexandra Chouldechova

Understanding and Meeting Practitioner Needs When Measuring Representational Harms Caused by LLM-Based Systems

Jun 04, 2025

Taxonomizing Representational Harms using Speech Act Theory

Apr 01, 2025

Validating LLM-as-a-Judge Systems in the Absence of Gold Labels

Mar 07, 2025

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice

Dec 09, 2024

A Framework for Evaluating LLMs Under Task Indeterminacy

Nov 21, 2024

SureMap: Simultaneous Mean Estimation for Single-Task and Multi-Task Disaggregated Evaluation

Nov 14, 2024

A structured regression approach for evaluating model performance across intersectional subgroups

Jan 26, 2024

The Impact of Differential Feature Under-reporting on Algorithmic Fairness

Jan 16, 2024

Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints

Jun 23, 2023

Examining risks of racial biases in NLP tools for child protective services

May 30, 2023