Brian Y. Lim

Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Apr 10, 2024

IRIS: Interpretable Rubric-Informed Segmentation for Action Quality Assessment
Mar 16, 2023

RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions
Feb 25, 2023

Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses
Feb 02, 2023

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Jan 30, 2022

Towards Relatable Explainable AI with the Perceptual Process
Dec 28, 2021

Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation
Sep 21, 2021

Exploiting Explanations for Model Inversion Attacks
Apr 26, 2021

Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Jan 23, 2021

Debiased-CAM for bias-agnostic faithful visual explanations of deep convolutional networks
Dec 10, 2020