
Eunjung Lee

AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense

Jul 14, 2021
Duhun Hwang, Eunjung Lee, Wonjong Rhee

Figures 1–4 for AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense

We propose AID-Purifier, an auxiliary network that boosts the robustness of adversarially trained networks by purifying their inputs. It works as an add-on to an already-trained main classifier. To keep it computationally light, it is trained as a discriminator with a binary cross-entropy loss. To extract additional useful information from adversarial examples, its architecture follows information-maximization principles: two layers of the main classification network are piped into the auxiliary network. To assist the iterative optimization procedure of purification, the auxiliary network is trained with AVmixup. AID-Purifier can also be combined with other purifiers, such as PixelDefend, for an extra enhancement. Overall, the results indicate that the best-performing adversarially trained networks can be further enhanced by the best-performing purification networks, and that AID-Purifier is a competitive candidate that is both light and robust.
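The purification step described above can be illustrated with a toy sketch: a discriminator scores how "adversarial" an input looks, and the input is iteratively nudged by gradient descent to lower that score. This is only a minimal NumPy analogue under assumed weights, not the paper's actual network; the linear discriminator `w`, `b` and the step size are hypothetical placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def purify(x, w, b, steps=50, lr=0.5):
    """Toy analogue of iterative purification: gradient-descend on the
    input to reduce the discriminator's 'adversarial' probability."""
    x = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x + b)       # prob. the input is adversarial
        x -= lr * p * (1.0 - p) * w  # gradient of p with respect to x
    return x

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # hypothetical discriminator weights
b = 0.0
x_adv = rng.normal(size=8) + 0.5 * w  # input shifted toward 'adversarial'
x_pur = purify(x_adv, w, b)
print(sigmoid(w @ x_adv + b), sigmoid(w @ x_pur + b))
```

After purification the discriminator's adversarial score drops, which is the intended direction of the real method's optimization loop (the actual AID-Purifier operates on intermediate feature maps of the main classifier, not raw vectors as here).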

* ICML 2021 Workshop on Adversarial Machine Learning  

DEEP-BO for Hyperparameter Optimization of Deep Networks

May 23, 2019
Hyunghun Cho, Yongjin Kim, Eunjung Lee, Daeyoung Choi, Yongjae Lee, Wonjong Rhee

Figures 1–4 for DEEP-BO for Hyperparameter Optimization of Deep Networks

The performance of deep neural networks (DNNs) is highly sensitive to the particular choice of hyperparameters. To make matters worse, the shape of the learning curve can be significantly affected when a technique such as batchnorm is used. As a result, hyperparameter optimization of deep networks can be much more challenging than that of traditional machine-learning models. In this work, we start from well-known Bayesian Optimization solutions and provide enhancement strategies specifically designed for hyperparameter optimization of deep networks. The resulting algorithm is named DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO outperforms or matches well-known solutions including GP-Hedge, Hyperband, BOHB, the Median Stopping Rule, and Learning Curve Extrapolation. The code is publicly available at https://github.com/snu-adsl/DEEP-BO.
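The "Diversified" idea above can be sketched in miniature: rather than committing to a single acquisition strategy, the search alternates between strategies across rounds. The sketch below is a deliberately simplified stand-in (random exploration alternating with perturbation of the best-so-far point) for DEEP-BO's actual diversification over Bayesian Optimization acquisition functions; the 1-D `objective` and all parameters are hypothetical.

```python
import random

def objective(lr):
    # hypothetical validation error as a function of learning rate
    return (lr - 0.1) ** 2

def diversified_search(rounds=30, seed=0):
    """Toy analogue of diversification: alternate acquisition
    strategies instead of trusting one throughout the search."""
    rng = random.Random(seed)
    history = []  # (hyperparameter, observed error) pairs
    for t in range(rounds):
        if t % 2 == 0 or not history:
            # explore: draw a fresh random candidate
            cand = rng.uniform(0.0, 1.0)
        else:
            # exploit: perturb the best configuration found so far
            best_lr = min(history, key=lambda h: h[1])[0]
            cand = min(1.0, max(0.0, best_lr + rng.gauss(0.0, 0.05)))
        history.append((cand, objective(cand)))
    return min(history, key=lambda h: h[1])

best = diversified_search()
print(best)
```

The real algorithm replaces the two hand-coded strategies with a diversified portfolio of acquisition functions, adds early termination of unpromising configurations, and runs evaluations in parallel.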

* 26 pages, NeurIPS19 under review 