Kristian Lum

Evaluating Language Models for Harmful Manipulation

Mar 26, 2026

The Intersectionality Problem for Algorithmic Fairness

Nov 04, 2024

Imagen 3

Aug 13, 2024

STAR: SocioTechnical Approach to Red Teaming Language Models

Jun 17, 2024

The Impossibility of Fair LLMs

May 28, 2024

Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation

Feb 20, 2024

Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender Systems

Sep 12, 2022

Measuring and mitigating voting access disparities: a study of race and polling locations in Florida and North Carolina

May 30, 2022

De-biasing "bias" measurement

May 11, 2022

Measuring Disparate Outcomes of Content Recommendation Algorithms with Distributional Inequality Metrics

Feb 03, 2022