Baeseong Park

HyperCLOVA X Technical Report
Apr 13, 2024

DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation
Feb 27, 2024

AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
Oct 08, 2022

nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models
Jun 20, 2022

Modulating Regularization Frequency for Efficient Compression-Aware Model Training
May 05, 2021

Sequential Encryption of Sparse Neural Networks Toward Optimum Representation of Irregular Sparsity
May 05, 2021

Q-Rater: Non-Convex Optimization for Post-Training Uniform Quantization
May 05, 2021

Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation
Oct 13, 2020

FleXOR: Trainable Fractional Quantization
Sep 09, 2020

BiQGEMM: Matrix Multiplication with Lookup Table For Binary-Coding-based Quantized DNNs
May 20, 2020