Changhun Lee

A Bi-objective Perspective on Controllable Language Models: Reward Dropout Improves Off-policy Control Performance

Oct 06, 2023
Changhun Lee, Chiehyeon Lim


OWQ: Lessons learned from activation outliers for weight quantization in large language models

Jun 13, 2023
Changhun Lee, Jungyu Jin, Taesu Kim, Hyungjun Kim, Eunhyeok Park


INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold

Apr 18, 2022
Changhun Lee, Hyungjun Kim, Eunhyeok Park, Jae-Joon Kim


Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution

Dec 02, 2020
Hyungjun Kim, Jihoon Park, Changhun Lee, Jae-Joon Kim
