
Taesung Lee

Towards Generating Informative Textual Description for Neurons in Language Models

Jan 30, 2024
Shrayani Mondal, Rishabh Garodia, Arbaaz Qureshi, Taesung Lee, Youngja Park

URET: Universal Robustness Evaluation Toolkit (for Evasion)

Aug 03, 2023
Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, Ian Molloy, Masha Zorin

Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models

Jun 15, 2023
Myles Foley, Ambrish Rawat, Taesung Lee, Yufang Hou, Gabriele Picco, Giulio Zizzo

Robustness of Explanation Methods for NLP Models

Jun 24, 2022
Shriya Atmakuri, Tejas Chheda, Dinesh Kandula, Nishant Yadav, Taesung Lee, Hessel Tuinhof

Adaptive Verifiable Training Using Pairwise Class Similarity

Dec 14, 2020
Shiqi Wang, Kevin Eykholt, Taesung Lee, Jiyong Jang, Ian Molloy

A new measure for overfitting and its implications for backdooring of deep learning

Jun 18, 2020
Kathrin Grosse, Taesung Lee, Youngja Park, Michael Backes, Ian Molloy

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

Nov 09, 2018
Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava

Defending Against Model Stealing Attacks Using Deceptive Perturbations

Sep 19, 2018
Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su
