
Mengzhao Chen

Scaling Law for Quantization-Aware Training

May 20, 2025

Model Merging in Pre-training of Large Language Models

May 17, 2025

DanceGRPO: Unleashing GRPO on Visual Generation

May 12, 2025

Enhance-A-Video: Better Generated Video for Free

Feb 11, 2025

PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs

Oct 07, 2024

Adapting LLaMA Decoder to Vision Transformer

Apr 13, 2024

BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation

Feb 18, 2024

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization

Nov 16, 2023

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models

Aug 25, 2023

Spatial Re-parameterization for N:M Sparsity

Jun 09, 2023