Minsoo Kim

Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization

Nov 09, 2023
Jangwhan Lee, Minsoo Kim, Seungcheol Baek, Seok Joong Hwang, Wonyong Sung, Jungwook Choi

Large Language Models (LLMs) are proficient in natural language processing tasks, but their deployment is often restricted by extensive parameter sizes and computational demands. This paper focuses on post-training quantization (PTQ) in LLMs, specifically 4-bit weight and 8-bit activation (W4A8) quantization, to enhance computational efficiency -- a topic less explored than weight-only quantization. We present two innovative techniques, activation-quantization-aware scaling (AQAS) and sequence-length-aware calibration (SLAC), which enhance PTQ by considering the combined effects on weights and activations and by aligning calibration sequence lengths to target tasks. Moreover, we introduce dINT, a hybrid data format combining integer and denormal representations, to address the underflow issue in W4A8 quantization, where small values are rounded to zero. Through rigorous evaluations of LLMs, including OPT and LLaMA, we demonstrate that our techniques significantly boost task accuracies to levels comparable with full-precision models. By developing arithmetic units compatible with dINT, we further confirm that our methods yield a 2$\times$ hardware efficiency improvement compared to an 8-bit integer MAC unit.

* EMNLP 2023 Main Conference 
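
A plain INT4 weight quantizer makes the underflow issue concrete: after max-abs scaling, a sizeable fraction of small weights rounds to zero. The NumPy sketch below, using an illustrative Gaussian weight tensor, reproduces only this baseline behavior that dINT is designed to fix; the dINT format itself is specified in the paper.

```python
import numpy as np

def fake_quant_int4(w):
    """Plain symmetric per-row INT4 quantization (baseline, not dINT)."""
    scale = np.max(np.abs(w), axis=1, keepdims=True) / 7.0   # map per-row max-abs to +/-7
    q = np.clip(np.round(w / scale), -8, 7)                  # snap to the 4-bit integer grid
    return q * scale                                         # dequantize for inspection

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(128, 512)).astype(np.float32)   # toy weight matrix

w_q = fake_quant_int4(w)
underflow = np.mean((w_q == 0) & (w != 0))
print(f"nonzero weights rounded to zero: {underflow:.1%}")      # the W4 underflow issue
```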

Token-Scaled Logit Distillation for Ternary Weight Generative Language Models

Aug 13, 2023
Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, Jungwook Choi

Generative Language Models (GLMs) have shown impressive performance in tasks such as text generation, understanding, and reasoning. However, the large model size poses challenges for practical deployment. To solve this problem, Quantization-Aware Training (QAT) has become increasingly popular. However, current QAT methods for generative models have resulted in a noticeable loss of accuracy. To counteract this issue, we propose a novel knowledge distillation method specifically designed for GLMs. Our method, called token-scaled logit distillation, prevents overfitting and provides superior learning from the teacher model and ground truth. This research marks the first evaluation of ternary weight quantization-aware training of large-scale GLMs with less than 1.0 degradation in perplexity and no loss of accuracy in a reasoning task.
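
The abstract names the method but not its scaling rule, so the PyTorch sketch below shows a per-token-weighted KL distillation loss in which the per-token scale -- here the normalized teacher entropy -- is purely an illustrative assumption, not the paper's formula.

```python
import torch
import torch.nn.functional as F

def token_scaled_logit_distillation(student_logits, teacher_logits, pad_mask, temperature=1.0):
    """Per-token logit distillation with a per-token weight (illustrative sketch).

    student_logits, teacher_logits: (batch, seq_len, vocab)
    pad_mask: (batch, seq_len) bool, True for real (non-padding) tokens
    The weighting by normalized teacher entropy is an assumption for illustration.
    """
    t_logprob = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_logprob = F.log_softmax(student_logits / temperature, dim=-1)
    t_prob = t_logprob.exp()

    # Per-token KL(teacher || student), shape (batch, seq_len).
    kl = (t_prob * (t_logprob - s_logprob)).sum(dim=-1)

    # Illustrative per-token scale: normalized teacher entropy in [0, 1].
    entropy = -(t_prob * t_logprob).sum(dim=-1)
    scale = entropy / torch.log(torch.tensor(float(teacher_logits.size(-1))))

    loss = (scale * kl * pad_mask).sum() / pad_mask.sum().clamp(min=1)
    return loss * temperature ** 2
```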

Self-supervised Equality Embedded Deep Lagrange Dual for Approximate Constrained Optimization

Jul 02, 2023
Minsoo Kim, Hongseok Kim

Conventional solvers are often computationally expensive for constrained optimization, particularly in large-scale and time-critical problems. This has led to growing interest in using neural networks (NNs) as fast approximators of optimal solutions, but incorporating constraints into NNs is challenging. In this regard, we propose deep Lagrange dual with equality embedding (DeepLDE), a framework that learns to find an optimal solution without using labels. To ensure feasible solutions, we embed equality constraints into the NNs and train the NNs using the primal-dual method to impose inequality constraints. Furthermore, we prove the convergence of DeepLDE and show that the primal-dual learning method alone cannot ensure equality constraints without the help of equality embedding. Simulation results on convex, non-convex, and AC optimal power flow (AC-OPF) problems show that the proposed DeepLDE achieves the smallest optimality gap among all the NN-based approaches while always ensuring feasible solutions. In addition, the proposed method is about 5 to 250 times faster than DC3 and conventional solvers when solving constrained convex, non-convex, and AC-OPF problems.

* 11 pages, 5 figures 
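
A hedged sketch of the training scheme described above: the network predicts a subset of the variables, the equality constraints are used to complete the solution exactly, and the inequality constraints are imposed through primal-dual updates of Lagrange multipliers. The toy quadratic program, network architecture, and step sizes are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy problem (illustrative):  min_x 0.5*||x||^2  s.t.  A x = b,  x >= 0
n, m, batch = 6, 2, 256
A = torch.randn(m, n)
A1, A2 = A[:, : n - m], A[:, n - m :]           # A2 assumed invertible

net = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, n - m))
lam = torch.zeros(n)                            # dual variables for x >= 0
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
rho = 0.05                                      # dual ascent step size

def complete(z, b):
    """Equality embedding: recover x2 so that A x = b holds exactly."""
    x2 = torch.linalg.solve(A2, (b - z @ A1.T).T).T
    return torch.cat([z, x2], dim=1)

for step in range(2000):
    b = torch.rand(batch, m)                    # sampled problem instances, no labels
    x = complete(net(b), b)
    objective = 0.5 * (x ** 2).sum(dim=1).mean()
    violation = torch.relu(-x)                  # inequality violation of x >= 0
    primal_loss = objective + (lam * violation).sum(dim=1).mean()

    opt.zero_grad()
    primal_loss.backward()
    opt.step()

    with torch.no_grad():                       # dual ascent on the multipliers
        lam = torch.clamp(lam + rho * violation.mean(dim=0), min=0.0)

print("mean inequality violation:", torch.relu(-x).mean().item())
```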

Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding

Mar 07, 2023
Minyoung Hwang, Jaeyeon Jeong, Minsoo Kim, Yoonseon Oh, Songhwai Oh

The main challenge in vision-and-language navigation (VLN) is understanding natural-language instructions in an unseen environment. The main limitation of conventional VLN algorithms is that a mistaken action causes the agent to fail to follow the instructions or to explore unnecessary regions, leading it down an irrecoverable path. To tackle this problem, we propose Meta-Explore, a hierarchical navigation method that deploys an exploitation policy to correct misled recent actions. We show that an exploitation policy, which moves the agent toward a well-chosen local goal among unvisited but observable states, outperforms a method that moves the agent to a previously visited state. We also highlight the need to reason about regretful explorations with semantically meaningful clues. The key to our approach is understanding the object placements around the agent in the spectral domain. Specifically, we present a novel visual representation, called scene object spectrum (SOS), which performs a category-wise 2D Fourier transform of detected objects. Combining the exploitation policy and SOS features, the agent can correct its path by choosing a promising local goal. We evaluate our method on three VLN benchmarks: R2R, SOON, and REVERIE. Meta-Explore outperforms other baselines and shows strong generalization performance. In addition, local goal search using the proposed spectral-domain SOS features significantly improves the success rate by 17.1% and SPL by 20.6% on the SOON benchmark.

* Accepted by CVPR 2023. Project page: https://rllab-snu.github.io/projects/Meta-Explore/doc.html 
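
The SOS feature can be sketched as follows: rasterize detections into per-category occupancy maps and take a category-wise 2D Fourier transform. Only the category-wise 2D FFT follows the description above; the grid resolution and the box rasterization are assumptions.

```python
import numpy as np

def scene_object_spectrum(detections, num_categories, grid_size=64):
    """Category-wise 2D FFT of rasterized detections (illustrative sketch).

    detections: list of (category_id, x_min, y_min, x_max, y_max) with
    coordinates normalized to [0, 1]; rasterization details are assumptions.
    Returns an array of shape (num_categories, grid_size, grid_size).
    """
    maps = np.zeros((num_categories, grid_size, grid_size), dtype=np.float32)
    for cat, x0, y0, x1, y1 in detections:
        r0, r1 = int(y0 * grid_size), max(int(y1 * grid_size), int(y0 * grid_size) + 1)
        c0, c1 = int(x0 * grid_size), max(int(x1 * grid_size), int(x0 * grid_size) + 1)
        maps[cat, r0:r1, c0:c1] = 1.0           # binary occupancy per category

    # Category-wise 2D Fourier transform; keep the magnitude spectrum.
    return np.abs(np.fft.fft2(maps, axes=(-2, -1)))

sos = scene_object_spectrum([(3, 0.1, 0.2, 0.4, 0.5), (7, 0.6, 0.1, 0.9, 0.3)], num_categories=10)
print(sos.shape)  # (10, 64, 64)
```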

Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers

Feb 23, 2023
Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi

Pre-trained Transformer models such as BERT have shown great success in a wide range of applications, but at the cost of substantial increases in model complexity. Quantization-aware training (QAT) is a promising method to lower the implementation cost and energy consumption. However, aggressive quantization below 2 bits causes considerable accuracy degradation due to unstable convergence, especially when the downstream dataset is not abundant. This work proposes a proactive knowledge distillation method called Teacher Intervention (TI) for fast-converging QAT of ultra-low-precision pre-trained Transformers. TI intervenes in layer-wise signal propagation with the intact signal from the teacher to remove the interference of propagated quantization errors, smoothing the loss surface of QAT and expediting convergence. Furthermore, we propose a gradual intervention mechanism to stabilize the recovery of subsections of Transformer layers from quantization. The proposed schemes enable fast convergence of QAT and improve model accuracy regardless of the diverse characteristics of downstream fine-tuning tasks. We demonstrate that TI consistently achieves superior accuracy with significantly fewer fine-tuning iterations on well-known Transformers for natural language processing as well as computer vision, compared to state-of-the-art QAT methods.

* Accepted to EACL 2023 (main conference) 
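
A minimal sketch of the intervention idea, assuming a simple list-of-layers interface: each quantized student layer is fed the teacher's intact output of the previous layer, so propagated quantization errors do not contaminate the layer-wise distillation target. The gradual intervention schedule from the paper is omitted, and the module interfaces here are hypothetical.

```python
import torch
import torch.nn.functional as F

def teacher_intervention_loss(student_layers, teacher_layers, hidden, intervene=True):
    """Layer-wise distillation where each student layer sees the teacher's intact input.

    student_layers / teacher_layers: lists of modules mapping (B, T, H) -> (B, T, H)
    hidden: shared embedding output, shape (B, T, H)
    This is a sketch; the layer interfaces are hypothetical.
    """
    loss = 0.0
    s_in = t_in = hidden
    for s_layer, t_layer in zip(student_layers, teacher_layers):
        with torch.no_grad():
            t_out = t_layer(t_in)                          # teacher's clean target
        s_out = s_layer(t_in if intervene else s_in)       # intervention: feed the clean input
        loss = loss + F.mse_loss(s_out, t_out)
        s_in, t_in = s_out, t_out
    return loss
```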

Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders

Nov 20, 2022
Minsoo Kim, Sihwa Lee, Sukjin Hong, Du-Seong Chang, Jungwook Choi

Knowledge distillation (KD) has been a ubiquitous method for model compression, strengthening the capability of a lightweight model with knowledge transferred from a teacher. In particular, KD has been employed in quantization-aware training (QAT) of Transformer encoders like BERT to improve the accuracy of the student model with reduced-precision weight parameters. However, little is understood about which of the various KD approaches best fits the QAT of Transformers. In this work, we provide an in-depth analysis of the mechanism of KD on attention recovery of quantized large Transformers. In particular, we reveal that the previously adopted MSE loss on the attention score is insufficient for recovering the self-attention information. Therefore, we propose two KD methods: attention-map and attention-output losses. Furthermore, we explore the unification of both losses to address the task-dependent preference between attention-map and attention-output losses. The experimental results on various Transformer encoder models demonstrate that the proposed KD methods achieve state-of-the-art accuracy for QAT with sub-2-bit weight quantization.

* EMNLP 2022 Main Track Long Paper 
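
A minimal sketch of the two proposed losses, assuming standard tensor shapes: one term matches the student's softmax attention maps to the teacher's, the other matches the self-attention outputs. The MSE form and the unification weighting shown in the comment are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention_kd_losses(s_attn_probs, t_attn_probs, s_attn_out, t_attn_out):
    """Attention-map and attention-output distillation losses (sketch).

    s_attn_probs, t_attn_probs: softmax attention maps, (B, heads, T, T)
    s_attn_out, t_attn_out: self-attention block outputs, (B, T, H)
    The MSE form and equal treatment of the terms are assumptions for illustration.
    """
    map_loss = F.mse_loss(s_attn_probs, t_attn_probs)       # attention-map loss
    output_loss = F.mse_loss(s_attn_out, t_attn_out)        # attention-output loss
    return map_loss, output_loss

# A unified objective could weight the two terms in a task-dependent way, e.g.:
#   loss = alpha * map_loss + (1 - alpha) * output_loss
```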

Privacy-Preserving Text Classification on BERT Embeddings with Homomorphic Encryption

Oct 05, 2022
Garam Lee, Minsoo Kim, Jai Hyun Park, Seung-won Hwang, Jung Hee Cheon

Embeddings, which compress information in raw text into semantics-preserving low-dimensional vectors, have been widely adopted for their efficacy. However, recent research has shown that embeddings can potentially leak private information about sensitive attributes of the text, and in some cases can be inverted to recover the original input text. To address these growing privacy challenges, we propose a privatization mechanism for embeddings based on homomorphic encryption, to prevent potential leakage of any piece of information in the process of text classification. In particular, our method performs text classification on encrypted embeddings from state-of-the-art models like BERT, supported by an efficient GPU implementation of the CKKS encryption scheme. We show that our method offers encrypted protection of BERT embeddings, while largely preserving their utility on downstream text classification tasks.

* NAACL 2022 
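
A hedged sketch of the overall flow using the TenSEAL library as a stand-in (the paper relies on an efficient GPU CKKS implementation): the client encrypts a sentence embedding, the server evaluates a linear classifier on the ciphertext, and only the class scores are decrypted. The CKKS parameters, the embedding, and the classifier weights below are placeholders.

```python
import numpy as np
import tenseal as ts  # stand-in CKKS library, not the GPU implementation used in the paper

# CKKS context; parameter choices here are generic placeholders.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

embedding = np.random.randn(768)                           # placeholder "BERT" embedding
W = np.random.randn(2, 768)                                # placeholder two-class linear classifier

enc_embedding = ts.ckks_vector(ctx, embedding.tolist())    # client encrypts the embedding
enc_scores = [enc_embedding.dot(W[c].tolist()) for c in range(2)]  # server computes on ciphertext
scores = [s.decrypt()[0] for s in enc_scores]              # client decrypts class scores only
print("predicted class:", int(np.argmax(scores)))
```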

SR-GCL: Session-Based Recommendation with Global Context Enhanced Augmentation in Contrastive Learning

Sep 23, 2022
Eunkyu Oh, Taehun Kim, Minsoo Kim, Yunhu Ji, Sushil Khyalia

Session-based recommendation aims to predict the next behavior of users based on ongoing sessions. Previous works model a session as a variable-length sequence of items and learn representations of both individual items and the aggregated session. Recent research has applied graph neural networks with an attention mechanism to capture complicated item transitions and dependencies by modeling sessions as graph-structured data. However, these methods still face fundamental challenges in data and learning methodology, such as sparse supervision signals and noisy interactions in sessions, leading to sub-optimal performance. In this paper, we propose SR-GCL, a novel contrastive learning framework for session-based recommendation. As a crucial component of contrastive learning, we propose two global-context-enhanced data augmentation methods that maintain the semantics of the original session. Extensive experiments on two real-world e-commerce datasets demonstrate the superiority of SR-GCL compared to other state-of-the-art methods.

* 11 pages. This paper has been accepted by DLG-AAAI'22 
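
The global-context-enhanced augmentations are defined in the paper; the sketch below only shows the generic contrastive (NT-Xent) objective, assumed here as the component those two augmented session views feed into.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.2):
    """Standard NT-Xent contrastive loss over two views of session embeddings.

    z1, z2: (batch, dim) embeddings of two augmented views of the same sessions.
    The augmentations that produce the views are defined in the paper; this is
    only the generic contrastive objective they plug into.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, dim), unit norm
    sim = z @ z.T / temperature                               # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                      # positives are the paired views
```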