Joonsang Yu

GeNAS: Neural Architecture Search with Better Generalization

May 18, 2023
Joonhyun Jeong, Joonsang Yu, Geondo Park, Dongyoon Han, YoungJoon Yoo

Neural Architecture Search (NAS) aims to automatically discover network architectures with superior test performance. Recent NAS approaches rely on validation loss or accuracy to find the best network for the target data. In this paper, we investigate a new search measure for finding architectures with better generalization. We demonstrate that the flatness of the loss surface can be a promising proxy for predicting the generalization capability of neural network architectures. We evaluate the proposed method on various search spaces and show similar or even better performance than state-of-the-art NAS methods. Notably, the architectures found by the flatness measure generalize robustly to various shifts in data distribution (e.g., ImageNet-V2, -A, -O) as well as to other tasks such as object detection and semantic segmentation. Code is available at https://github.com/clovaai/GeNAS.

* Accepted by IJCAI 2023
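
As a rough illustration of how loss-surface flatness can serve as a search proxy (a minimal sketch with assumed hyperparameters such as the perturbation radius, not the exact GeNAS measure), one can perturb a candidate network's weights with random noise of a fixed radius and score it by the average loss increase; flatter minima show smaller increases:

    import copy
    import torch
    import torch.nn as nn

    def flatness_score(model, loss_fn, data, target, radius=0.01, n_samples=8):
        # Average loss increase under random weight perturbations of a fixed radius;
        # smaller values indicate a flatter minimum, used here as a generalization proxy.
        base_loss = loss_fn(model(data), target).item()
        increases = []
        for _ in range(n_samples):
            perturbed = copy.deepcopy(model)
            with torch.no_grad():
                for p in perturbed.parameters():
                    p.add_(radius * torch.randn_like(p))
            increases.append(loss_fn(perturbed(data), target).item() - base_loss)
        return sum(increases) / len(increases)

    # Hypothetical usage: rank candidate architectures by their flatness score.
    candidate = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    data, target = torch.randn(16, 32), torch.randint(0, 10, (16,))
    print(flatness_score(candidate, nn.CrossEntropyLoss(), data, target))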

Pipe-BD: Pipelined Parallel Blockwise Distillation

Jan 29, 2023
Hongsun Jang, Jaewon Jung, Jaeyong Song, Joonsang Yu, Youngsok Kim, Jinho Lee

Training large deep neural network models is highly challenging due to their tremendous computational and memory requirements. Blockwise distillation provides one promising route to faster convergence by splitting a large model into multiple smaller models. In state-of-the-art blockwise distillation methods, training is performed block by block in a data-parallel manner using multiple GPUs. To produce inputs for the student blocks, the teacher model is executed from the beginning up to the block currently under training. However, this results in a high overhead of redundant teacher execution, low GPU utilization, and extra data loading. To address these problems, we propose Pipe-BD, a novel parallelization method for blockwise distillation. Pipe-BD aggressively exploits pipeline parallelism, eliminating redundant teacher-block execution and increasing the per-device batch size for better resource utilization. We also extend Pipe-BD to hybrid parallelism for efficient workload balancing. As a result, Pipe-BD achieves significant acceleration without modifying the mathematical formulation of blockwise distillation. We implement Pipe-BD in PyTorch, and experiments show that it is effective across multiple scenarios, models, and datasets.

* To appear at DATE'23 
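
A toy, single-process sketch of the scheduling idea (not the actual Pipe-BD implementation, which distributes stages across GPUs): each stage owns one teacher block and its matching student block, the teacher block runs exactly once, and its activation is handed to the next stage instead of re-running the teacher from the first block for every student block:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical blocks; in the real setting each stage would sit on its own device.
    teacher_blocks = nn.ModuleList([nn.Linear(32, 32) for _ in range(3)])
    student_blocks = nn.ModuleList([nn.Linear(32, 32) for _ in range(3)])
    opts = [torch.optim.SGD(s.parameters(), lr=0.1) for s in student_blocks]

    x = torch.randn(8, 32)
    act = x
    for t_blk, s_blk, opt in zip(teacher_blocks, student_blocks, opts):
        with torch.no_grad():
            next_act = t_blk(act)                    # teacher block executed exactly once
        loss = F.mse_loss(s_blk(act), next_act)      # blockwise distillation loss
        opt.zero_grad(); loss.backward(); opt.step()
        act = next_act                               # activation forwarded to the next stage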

Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration

Jan 23, 2023
Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee

Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest for addressing the computational cost problem, especially in low-profile systems. The large co-exploration space is often handled by adopting the idea of differentiable neural architecture search. However, despite its superior search efficiency, differentiable co-exploration faces a critical limitation: it cannot systematically satisfy hard constraints such as frame rate. To handle hard constraints in differentiable co-exploration, we propose HDX, which searches for hard-constrained solutions without compromising the global design objectives. By manipulating the gradients in the interest of the given hard constraint, high-quality solutions satisfying the constraint can be obtained.

* Published at DAC'22
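
A hedged sketch of gradient manipulation for a hard constraint (illustrative only; the exact HDX rule differs): while a hypothetical latency budget is violated, the update follows the constraint gradient, and the task gradient is projected so it no longer conflicts with satisfying the constraint:

    import torch

    theta = torch.randn(8, requires_grad=True)        # joint network/accelerator parameters
    budget = 2.0                                       # hypothetical latency budget

    task_loss = lambda p: ((p - 1.0) ** 2).sum()       # stand-in for the accuracy objective
    latency = lambda p: p.abs().sum()                  # stand-in for a differentiable latency model

    opt = torch.optim.SGD([theta], lr=0.05)
    for _ in range(200):
        g_task = torch.autograd.grad(task_loss(theta), theta)[0]
        if latency(theta).item() > budget:             # hard constraint violated
            g_con = torch.autograd.grad(latency(theta), theta)[0]
            dot = (g_task * g_con).sum()
            if dot < 0:                                # task gradient fights the constraint
                g_task = g_task - dot / (g_con.norm() ** 2 + 1e-12) * g_con
            grad = g_task + g_con                      # constraint term pushes toward feasibility
        else:
            grad = g_task                              # feasible: follow the objective only
        opt.zero_grad()
        theta.grad = grad
        opt.step()
    print(float(latency(theta)), float(task_loss(theta)))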

Rediscovery of the Effectiveness of Standard Convolution for Lightweight Face Detection

Apr 04, 2022
Joonhyun Jeong, Beomyoung Kim, Joonsang Yu, Youngjoon Yoo

This paper analyzes the design choices of face detection architectures that improve the trade-off between computation cost and accuracy. Specifically, we re-examine the effectiveness of the standard convolutional block as a lightweight backbone architecture for face detection. Unlike the current tendency of lightweight architecture design, which heavily utilizes depthwise separable convolution layers, we show that heavily channel-pruned standard convolution layers can achieve better accuracy and inference speed at a similar parameter budget. This observation is supported by analyses of the characteristics of the target data domain, faces. Based on our observation, we propose to employ ResNet with highly reduced channel widths, which surprisingly achieves high efficiency compared to other mobile-friendly networks (e.g., MobileNet-V1, -V2, -V3). Extensive experiments show that the proposed backbone can replace that of state-of-the-art face detectors with faster inference speed. We also propose a new feature aggregation method that maximizes detection performance. Our proposed detector, EResFD, obtains 80.4% mAP on the WIDER FACE Hard subset while taking only 37.7 ms for VGA image inference on CPU. Code will be available at https://github.com/clovaai/EResFD.
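
A quick illustration of the parameter-budget argument (hypothetical channel widths, not the EResFD configuration): a standard 3x3 convolution with a heavily pruned channel width can land at roughly the same parameter count as a wider depthwise-separable block:

    import torch.nn as nn

    def n_params(module):
        return sum(p.numel() for p in module.parameters())

    # Wider depthwise-separable block (MobileNet-style), hypothetical width of 64.
    dw_sep = nn.Sequential(
        nn.Conv2d(64, 64, 3, padding=1, groups=64, bias=False),   # depthwise 3x3
        nn.Conv2d(64, 64, 1, bias=False),                          # pointwise 1x1
    )
    # Standard 3x3 convolution with a heavily pruned width of 22 channels.
    slim_std = nn.Conv2d(22, 22, 3, padding=1, bias=False)

    print(n_params(dw_sep), n_params(slim_std))   # 4672 vs 4356: comparable budgets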

It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher

Apr 01, 2022
Kanghyun Choi, Hye Yoon Lee, Deokki Hong, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee

Model quantization is considered a promising method to greatly reduce the resource requirements of deep neural networks. To deal with the performance drop induced by quantization errors, a popular approach is to fine-tune quantized networks with training data. In real-world environments, however, this is frequently infeasible because training data is unavailable due to security, privacy, or confidentiality concerns. Zero-shot quantization addresses such problems, usually by taking information from the weights of a full-precision teacher network to compensate for the performance drop of the quantized networks. In this paper, we first analyze the loss surface of state-of-the-art zero-shot quantization techniques and provide several findings. In contrast to usual knowledge distillation problems, zero-shot quantization often suffers from 1) the difficulty of optimizing multiple loss terms together, and 2) poor generalization capability due to the use of synthetic samples. Furthermore, we observe that many weights fail to cross the rounding threshold while training the quantized networks, even when doing so is necessary for better performance. Based on these observations, we propose AIT, a simple yet powerful technique for zero-shot quantization, which addresses the two problems in the following way: AIT i) uses a KL distance loss only, without a cross-entropy loss, and ii) manipulates gradients to guarantee that a certain portion of weights are properly updated after crossing the rounding thresholds. Experiments show that AIT outperforms many existing methods by a large margin, taking over the overall state-of-the-art position in the field.

* selected for an oral presentation at CVPR 2022 
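
A hedged sketch of the KL-only distillation objective for zero-shot quantization (the gradient manipulation for rounding-threshold crossing is omitted; the naive fake-quantizer and random synthetic inputs below are stand-ins, not the AIT pipeline):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
    student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

    def fake_quantize(model, n_bits=4):
        # Naive per-tensor weight fake-quantization, for illustration only.
        with torch.no_grad():
            for p in model.parameters():
                scale = p.abs().max() / (2 ** (n_bits - 1) - 1)
                p.copy_(torch.round(p / scale) * scale)

    fake_quantize(student)
    opt = torch.optim.SGD(student.parameters(), lr=0.01)
    synthetic = torch.randn(16, 32)                    # stands in for generated samples
    with torch.no_grad():
        t_prob = F.softmax(teacher(synthetic), dim=1)
    # KL distance between teacher and quantized-student predictions; no cross-entropy term.
    loss = F.kl_div(F.log_softmax(student(synthetic), dim=1), t_prob, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()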

NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference

Dec 03, 2021
Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, Dong Hyun Lee, Jungwook Choi

Non-linear operations such as GELU, Layer Normalization, and Softmax are essential yet costly building blocks of Transformer models. Several prior works simplified these operations with look-up tables or integer computations, but such approximations suffer from inferior accuracy or considerable hardware cost and long latency. This paper proposes an accurate and hardware-friendly approximation framework for efficient Transformer inference. Our framework employs a simple neural network as a universal approximator, with its structure equivalently transformed into a look-up table (LUT). The proposed framework, called NN-LUT, can accurately replace all the non-linear operations in popular BERT models with significant reductions in area, power consumption, and latency.

* 7 pages, 3 figures 
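
A hedged sketch of the underlying idea (assumed table size and input range, not the NN-LUT implementation): fit a tiny ReLU network, which is piecewise linear by construction, to GELU, then tabulate its response so inference needs only a table lookup:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Fit a tiny piecewise-linear ReLU network to GELU on an assumed input range.
    approx = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.Adam(approx.parameters(), lr=1e-2)
    x = torch.linspace(-6, 6, 2048).unsqueeze(1)
    for _ in range(500):
        loss = F.mse_loss(approx(x), F.gelu(x))
        opt.zero_grad(); loss.backward(); opt.step()

    # Tabulate the learned function into a 256-entry uniform look-up table.
    lut_x = torch.linspace(-6, 6, 256).unsqueeze(1)
    with torch.no_grad():
        lut_y = approx(lut_x).squeeze(1)

    def lut_gelu(v, lo=-6.0, hi=6.0):
        idx = ((v.clamp(lo, hi) - lo) / (hi - lo) * 255).round().long()
        return lut_y[idx]                              # inference is just a table lookup

    test = torch.randn(1000)
    print(float((lut_gelu(test) - F.gelu(test)).abs().mean()))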

DANCE: Differentiable Accelerator/Network Co-Exploration

Sep 14, 2020
Kanghyun Choi, Deokki Hong, Hojae Yoon, Joonsang Yu, Youngsok Kim, Jinho Lee

To cope with the ever-increasing computational demand of DNN execution, recent neural architecture search (NAS) algorithms take hardware cost metrics, such as GPU latency, into account. To further pursue fast, efficient execution, DNN-specialized hardware accelerators are being designed for multiple purposes, far exceeding the efficiency of GPUs. However, these hardware-related metrics have been shown to exhibit non-linear relationships with the network architecture. This makes it a chicken-and-egg problem to optimize the network against the accelerator, or the accelerator against the network. In such circumstances, this work presents DANCE, a differentiable approach towards the co-exploration of hardware accelerator and network architecture design. At the heart of DANCE is a differentiable evaluator network. By modeling the hardware evaluation software with a neural network, the relation between the accelerator architecture and the hardware metrics becomes differentiable, allowing the search to be performed with backpropagation. Compared to naive existing approaches, our method performs co-exploration in significantly less time while achieving superior accuracy and hardware cost metrics.
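
A minimal sketch of the differentiable-evaluator idea (the cost model, encodings, and loss weights are assumptions, not the DANCE setup): fit a small MLP to the outputs of a hardware cost model, then backpropagate through it jointly with an accuracy proxy:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def hw_cost_model(enc):
        # Stand-in for the non-differentiable hardware evaluation software.
        return enc.clamp(min=0).sum(dim=1, keepdim=True) ** 1.5

    # Differentiable evaluator: a small MLP fit to the cost model's outputs.
    evaluator = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(evaluator.parameters(), lr=1e-2)
    for _ in range(300):
        enc = torch.rand(64, 6)                        # sampled architecture/accelerator encodings
        loss = F.mse_loss(evaluator(enc), hw_cost_model(enc))
        opt.zero_grad(); loss.backward(); opt.step()

    # Co-search step: gradients flow through the surrogate to the encoding.
    arch = torch.rand(1, 6, requires_grad=True)
    search_opt = torch.optim.Adam([arch], lr=1e-2)
    proxy_acc_loss = ((arch - 0.8) ** 2).sum()         # hypothetical accuracy proxy
    total = proxy_acc_loss + 0.1 * evaluator(arch).squeeze()
    search_opt.zero_grad(); total.backward(); search_opt.step()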

Network Recasting: A Universal Method for Network Architecture Transformation

Sep 14, 2018
Joonsang Yu, Sungbum Kang, Kiyoung Choi

This paper proposes network recasting as a general method for network architecture transformation. The primary goal of the method is to accelerate inference through the transformation, but there can be many other practical applications. The method is based on blockwise recasting: it recasts each source block in a pre-trained teacher network into a target block in a student network. For the recasting, a target block is trained so that its output activation approximates that of the source block. Recasting block by block in this sequential manner transforms the network architecture while preserving accuracy. The method can be used to transform an arbitrary teacher network type into an arbitrary student network type, and can even generate a mixed-architecture network that consists of two or more types of blocks. Network recasting can produce a network with fewer parameters and/or activations, which reduces inference time significantly. Naturally, it can also be used for network compression by recasting a trained network into a smaller network of the same type. Our experiments show that it outperforms previous compression approaches in terms of actual speedup on a GPU.
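
A simplified sketch of blockwise recasting under assumed shapes (not the paper's code): each target block is trained so that its output activation approximates that of the corresponding source block, one block pair at a time, here recasting 3x3 convolution blocks into cheaper 1x1 blocks:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Source blocks: 3x3 conv blocks. Target blocks: cheaper 1x1 conv blocks.
    source_blocks = nn.ModuleList(
        [nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()) for _ in range(2)]).eval()
    target_blocks = nn.ModuleList(
        [nn.Sequential(nn.Conv2d(16, 16, 1), nn.ReLU()) for _ in range(2)])

    act = torch.randn(8, 16, 14, 14)
    for src, tgt in zip(source_blocks, target_blocks):
        opt = torch.optim.Adam(tgt.parameters(), lr=1e-3)
        with torch.no_grad():
            goal = src(act)                            # source block's output activation
        for _ in range(100):
            loss = F.mse_loss(tgt(act), goal)          # train the target block to mimic it
            opt.zero_grad(); loss.backward(); opt.step()
        act = goal                                     # move on to the next block pair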
