
Hoki Kim

Fair Sampling in Diffusion Models through Switching Mechanism

Jan 09, 2024
Yujin Choi, Jinseong Park, Hoki Kim, Jaewook Lee, Saeroom Park

Differentially Private Sharpness-Aware Training

Jun 09, 2023
Jinseong Park, Hoki Kim, Yujin Choi, Jaewook Lee

Stability Analysis of Sharpness-Aware Minimization

Jan 16, 2023
Hoki Kim, Jinseong Park, Yujin Choi, Jaewook Lee

Comment on Transferability and Input Transformation with Additive Noise

Jun 18, 2022
Hoki Kim, Jinseong Park, Jaewook Lee

Bridged Adversarial Training

Aug 25, 2021
Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee

GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization

Jul 06, 2021
Sungyoon Lee, Hoki Kim, Jaewook Lee

Torchattacks: A PyTorch Repository for Adversarial Attacks

Oct 06, 2020
Hoki Kim

Understanding Catastrophic Overfitting in Single-step Adversarial Training

Oct 05, 2020
Hoki Kim, Woojin Lee, Jaewook Lee
