Jae-joon Kim

L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ

Feb 15, 2024