Wonyong Sung

Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization

Nov 09, 2023
Jangwhan Lee, Minsoo Kim, Seungcheol Baek, Seok Joong Hwang, Wonyong Sung, Jungwook Choi


Token-Scaled Logit Distillation for Ternary Weight Generative Language Models

Aug 13, 2023
Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, Jungwook Choi


Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers

Feb 23, 2023
Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi


Sleep Model -- A Sequence Model for Predicting the Next Sleep Stage

Feb 17, 2023
Iksoo Choi, Wonyong Sung


Exploring Attention Map Reuse for Efficient Transformer Neural Networks

Jan 29, 2023
Kyuhong Shim, Jungwook Choi, Wonyong Sung


Macro-block dropout for improved regularization in training end-to-end speech recognition models

Dec 29, 2022
Chanwoo Kim, Sathish Indurti, Jinhwan Park, Wonyong Sung


A Comparison of Transformer, Convolutional, and Recurrent Neural Networks on Phoneme Recognition

Oct 01, 2022
Kyuhong Shim, Wonyong Sung


Korean Tokenization for Beam Search Rescoring in Speech Recognition

Mar 28, 2022
Kyuhong Shim, Hyewon Bae, Wonyong Sung


Similarity and Content-based Phonetic Self Attention for Speech Recognition

Mar 28, 2022
Kyuhong Shim, Wonyong Sung
