
Seung-Hoon Na

ROSAQ: Rotation-based Saliency-Aware Weight Quantization for Efficiently Compressing Large Language Models

Jun 16, 2025

Improving Term Frequency Normalization for Multi-topical Documents, and Application to Language Modeling Approaches

Feb 08, 2015