Maria De-Arteaga

Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation

Jan 29, 2024
Terrence Neumann, Sooyong Lee, Maria De-Arteaga, Sina Fazelpour, Matthew Lease

A Critical Survey on Fairness Benefits of XAI

Oct 15, 2023
Luca Deck, Jakob Schoeffer, Maria De-Arteaga, Niklas Kühl

Mitigating Label Bias via Decoupled Confident Learning

Jul 18, 2023
Yunyi Li, Maria De-Arteaga, Maytal Saar-Tsechansky

Human-Centered Responsible Artificial Intelligence: Current & Future Trends

Feb 16, 2023
Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu

Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection

Feb 14, 2023
Soumyajit Gupta, Sooyong Lee, Maria De-Arteaga, Matthew Lease

Learning Complementary Policies for Human-AI Teams

Feb 06, 2023
Ruijiang Gao, Maytal Saar-Tsechansky, Maria De-Arteaga, Ligong Han, Wei Sun, Min Kyung Lee, Matthew Lease

On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

Sep 23, 2022
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl

Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness

Aug 13, 2022
Vincent Jeanselme, Maria De-Arteaga, Zhe Zhang, Jessica Barrett, Brian Tom

Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables

Jul 28, 2022
Kenneth Holstein, Maria De-Arteaga, Lakshmi Tumati, Yanghuidi Cheng

Algorithmic Fairness in Business Analytics: Directions for Research and Practice

Jul 22, 2022
Maria De-Arteaga, Stefan Feuerriegel, Maytal Saar-Tsechansky
