Jungwook Choi

NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference
Dec 03, 2021
Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, Dong Hyun Lee, Jungwook Choi

Layer-wise Pruning of Transformer Attention Heads for Efficient Language Modeling
Oct 07, 2021
Kyuhong Shim, Iksoo Choi, Wonyong Sung, Jungwook Choi

Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Jan 04, 2021
Muhammad Shafique, Mahum Naseer, Theocharis Theocharides, Christos Kyrkou, Onur Mutlu, Lois Orosa, Jungwook Choi

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks
Sep 30, 2020
Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Jan 19, 2019
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh Shanbhag, Kailash Gopalakrishnan

Training Deep Neural Networks with 8-bit Floating Point Numbers
Dec 19, 2018
Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, Kailash Gopalakrishnan

Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jul 17, 2018
Jungwook Choi, Pierce I-Jen Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan

PACT: Parameterized Clipping Activation for Quantized Neural Networks
Jul 17, 2018
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan

AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training
Dec 07, 2017
Chia-Yu Chen, Jungwook Choi, Daniel Brand, Ankur Agrawal, Wei Zhang, Kailash Gopalakrishnan