Neo Christopher Chung
Class-Discriminative Attention Maps for Vision Transformers
Dec 04, 2023
Lennart Brocki, Neo Christopher Chung

Challenges of Large Language Models for Mental Health Counseling
Nov 23, 2023
Neo Christopher Chung, George Dyer, Lennart Brocki

Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models
Mar 20, 2023
Lennart Brocki, Neo Christopher Chung

Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators
Mar 02, 2023
Lennart Brocki, Neo Christopher Chung

Deep Learning Mental Health Dialogue System
Jan 23, 2023
Lennart Brocki, George C. Dyer, Anna Gładka, Neo Christopher Chung

Evaluation of importance estimators in deep learning classifiers for Computed Tomography
Sep 30, 2022
Lennart Brocki, Wistan Marchadour, Jonas Maison, Bogdan Badic, Panagiotis Papadimitroulas, Mathieu Hatt, Franck Vermet, Neo Christopher Chung

Evaluation of Interpretability Methods and Perturbation Artifacts in Deep Neural Networks
Mar 06, 2022
Lennart Brocki, Neo Christopher Chung

Human in the Loop for Machine Creativity
Oct 07, 2021
Neo Christopher Chung

Removing Brightness Bias in Rectified Gradients
Nov 14, 2020
Lennart Brocki, Neo Christopher Chung

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
Oct 29, 2019
Lennart Brocki, Neo Christopher Chung