Angelina Wang

The Limits of AI Data Transparency Policy: Three Disclosure Fallacies

Jan 26, 2026

Who Evaluates AI's Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations

Nov 06, 2025

Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor

Jun 17, 2025

Measurement to Meaning: A Validity-Centered Framework for AI Evaluation

May 13, 2025

Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning

May 07, 2025

Toward an Evaluation Science for Generative AI Systems

Mar 07, 2025

Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs

Feb 04, 2025

Measuring Implicit Bias in Explicitly Unbiased Large Language Models

Feb 06, 2024

Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

Feb 06, 2024

Overcoming Bias in Pretrained Models by Manipulating the Finetuning Dataset

Mar 10, 2023