
Maria De-Arteaga

Fairly Accurate: Optimizing Accuracy Parity in Fair Target-Group Detection

Jul 16, 2024

Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation

Jan 29, 2024

A Critical Survey on Fairness Benefits of XAI

Oct 15, 2023

Mitigating Label Bias via Decoupled Confident Learning

Jul 18, 2023

Human-Centered Responsible Artificial Intelligence: Current & Future Trends

Feb 16, 2023

Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection

Feb 14, 2023

Learning Complementary Policies for Human-AI Teams

Feb 06, 2023

On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

Sep 23, 2022

Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness

Aug 13, 2022

Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

Jul 28, 2022