Brian Y. Lim

Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Apr 10, 2024
Jessica Y. Bo, Pan Hao, Brian Y. Lim

IRIS: Interpretable Rubric-Informed Segmentation for Action Quality Assessment
Mar 16, 2023
Hitoshi Matsuyama, Nobuo Kawaguchi, Brian Y. Lim

RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions
Feb 25, 2023
Yunlong Wang, Shuyuan Shen, Brian Y. Lim

Diagrammatization: Rationalizing with diagrammatic AI explanations for abductive reasoning on hypotheses
Feb 02, 2023
Brian Y. Lim, Joseph P. Cahaly, Chester Y. F. Sng, Adam Chew

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Jan 30, 2022
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim

Towards Relatable Explainable AI with the Perceptual Process
Dec 28, 2021
Wencan Zhang, Brian Y. Lim

Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation
Sep 21, 2021
Yunlong Wang, Priyadarshini Venkatesh, Brian Y. Lim

Exploiting Explanations for Model Inversion Attacks
Apr 26, 2021
Xuejun Zhao, Wencan Zhang, Xiaokui Xiao, Brian Y. Lim

Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations
Jan 23, 2021
Danding Wang, Wencan Zhang, Brian Y. Lim
