Maria De-Arteaga

More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias

Jul 15, 2022
Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky

Doubting AI Predictions: Influence-Driven Second Opinion Recommendation

Apr 29, 2022
Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski

Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms

Apr 29, 2022
Terrence Neumann, Maria De-Arteaga, Sina Fazelpour

On the Relationship Between Explanations, Fairness Perceptions, and Decisions

Apr 29, 2022
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

Aug 29, 2021
Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai

The effect of differential victim crime reporting on predictive policing systems

Feb 04, 2021
Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova

Leveraging Expert Consistency to Improve Algorithmic Decision Support

Jan 24, 2021
Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

Apr 10, 2019
Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

Jan 27, 2019
Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai
