Sorelle A. Friedler

Measuring and mitigating voting access disparities: a study of race and polling locations in Florida and North Carolina

May 30, 2022

Energy Usage Reports: Environmental awareness as part of algorithmic accountability

Dec 16, 2019

Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

Jun 20, 2019

Assessing the Local Interpretability of Machine Learning Models

Feb 09, 2019

Fairness in representation: quantifying stereotyping as a representational harm

Jan 28, 2019

Interpretable Active Learning

Jun 24, 2018

A comparative study of fairness-enhancing interventions in machine learning

Feb 13, 2018

Runaway Feedback Loops in Predictive Policing

Dec 22, 2017

Auditing Black-box Models for Indirect Influence

Nov 30, 2016

On the (im)possibility of fairness

Sep 23, 2016