Sunnie S. Y. Kim

Allowing humans to interactively guide machines where to look does not always improve a human-AI team's classification accuracy
Apr 14, 2024
Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Nguyen

WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the Annual CVPR Conference
Sep 22, 2023
Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang

Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application
May 15, 2023
Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
Mar 27, 2023
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Oct 02, 2022
Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández

Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability
Jul 20, 2022
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features
Jun 16, 2022
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky

HIVE: Evaluating the Human Interpretability of Visual Explanations
Jan 10, 2022
Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
