
Sunnie S. Y. Kim

PersonaTeaming: Exploring How Introducing Personas Can Improve Automated AI Red-Teaming

Sep 03, 2025

Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations

Apr 14, 2025

Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies

Feb 12, 2025

"I'm Not Sure, But": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

May 01, 2024

Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy

Apr 14, 2024

WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the Annual CVPR Conference

Sep 22, 2023

Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application

May 15, 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

Mar 27, 2023

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Oct 02, 2022

Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability

Jul 20, 2022