Changhai Zhou

Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth

May 02, 2025

Large Language Model Compression with Global Rank and Sparsity Optimization

May 02, 2025

QPruner: Probabilistic Decision Quantization for Structured Pruning in Large Language Models

Dec 16, 2024

AutoMixQ: Self-Adjusting Quantization for High Performance Memory-Efficient Fine-Tuning

Nov 21, 2024

RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs

Jun 22, 2024