Wonyong Sung

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks

Sep 30, 2020

S-SGD: Symmetrical Stochastic Gradient Descent with Weight Noise Injection for Reaching Flat Minima

Sep 05, 2020

Quantized Neural Networks: Characterization and Holistic Optimization

May 31, 2020

SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks

Feb 02, 2020

Empirical Analysis of Knowledge Distillation Technique for Optimization of Quantized Deep Neural Networks

Oct 05, 2019

Single Stream Parallelization of Recurrent Neural Networks for Low Power and Fast Inference

Mar 30, 2018

Structured Sparse Ternary Weight Coding of Deep Neural Networks for Efficient Hardware Implementations

Jul 01, 2017

Generative Knowledge Transfer for Neural Language Models

Feb 28, 2017

Fixed-point optimization of deep neural networks with adaptive step size retraining

Feb 27, 2017

Character-Level Language Modeling with Hierarchical Recurrent Neural Networks

Feb 02, 2017