Mong Li Lee

SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection
Mar 05, 2024
Peng Qi, Zehong Yan, Wynne Hsu, Mong Li Lee

Leveraging Old Knowledge to Continually Learn New Classes in Medical Images
Mar 24, 2023
Evelyn Chee, Mong Li Lee, Wynne Hsu

Distributional Shifts in Automated Diabetic Retinopathy Screening
Jul 25, 2021
Jay Nandy, Wynne Hsu, Mong Li Lee

Towards Fully Interpretable Deep Neural Networks: Are We There Yet?
Jun 24, 2021
Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee

Adversarially Robust Classifier with Covariate Shift Adaptation
Feb 09, 2021
Jay Nandy, Sudipan Saha, Wynne Hsu, Mong Li Lee, Xiao Xiang Zhu

Learning Semantically Meaningful Features for Interpretable Classifications
Jan 11, 2021
Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee

Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples
Oct 20, 2020
Jay Nandy, Wynne Hsu, Mong Li Lee

Approximate Manifold Defense Against Multiple Adversarial Perturbations
Apr 05, 2020
Jay Nandy, Wynne Hsu, Mong Li Lee

Normal Similarity Network for Generative Modelling
May 14, 2018
Jay Nandy, Wynne Hsu, Mong Li Lee

Quantifying Aspect Bias in Ordinal Ratings using a Bayesian Approach
May 24, 2017
Lahari Poddar, Wynne Hsu, Mong Li Lee