Counterfactual Predictions under Runtime Confounding

Jun 30, 2020
Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova


Counterfactual Risk Assessments, Evaluation, and Fairness

Aug 30, 2019
Amanda Coston, Alexandra Chouldechova, Edward H. Kennedy


What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

Apr 10, 2019
Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

* Accepted at NAACL 2019; Best Thematic Paper 

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

Jan 27, 2019
Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

* Accepted at ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), 2019 

The Frontiers of Fairness in Machine Learning

Oct 20, 2018
Alexandra Chouldechova, Aaron Roth


Learning under selective labels in the presence of expert consistency

Jul 04, 2018
Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova

* Presented at the 2018 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018) 

Does mitigating ML's impact disparity require treatment disparity?

Feb 28, 2018
Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley


Fairer and more accurate, but for whom?

Jun 30, 2017
Alexandra Chouldechova, Max G'Sell

* Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017) 

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

Feb 28, 2017
Alexandra Chouldechova

* The short conference version of the paper was previously uploaded as arXiv:1610.07524 

Generalized Additive Model Selection

Jun 17, 2015
Alexandra Chouldechova, Trevor Hastie

* 23 pages, 10 figures 
