Yongsu Ahn

Human-centered explanation does not fit all: The interplay of sociotechnical, cognitive, and individual factors in the effect of AI explanations in algorithmic decision-making

Feb 17, 2025

Gender Bias in LLM-generated Interview Responses

Oct 28, 2024

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Sep 10, 2024

Exploring Teachers' Perception of Artificial Intelligence: The Socio-emotional Deficiency as Opportunities and Challenges in Human-AI Complementarity in K-12 Education

May 20, 2024

Break Out of a Pigeonhole: A Unified Framework for Examining Miscalibration, Bias, and Stereotype in Recommender Systems

Dec 29, 2023

HungerGist: An Interpretable Predictive Model for Food Insecurity

Nov 18, 2023

VISPUR: Visual Aids for Identifying and Interpreting Spurious Associations in Data-Driven Decisions

Jul 26, 2023

ESCAPE: Countering Systematic Errors from Machine's Blind Spots via Interactive Visual Analysis

Mar 16, 2023

Tribe or Not? Critical Inspection of Group Differences Using TribalGram

Mar 16, 2023

FairSight: Visual Analytics for Fairness in Decision Making

Aug 01, 2019