
Xianglong Liu

Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes

May 09, 2024

LLM-QBench: A Benchmark Towards the Best Practice for Post-training Quantization of Large Language Models

May 09, 2024

Towards Robust Physical-world Backdoor Attacks on Lane Detection

May 09, 2024

PTQ4SAM: Post-Training Quantization for Segment Anything

May 06, 2024

IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors

May 02, 2024

How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study

Apr 22, 2024

BinaryDM: Towards Accurate Binarization of Diffusion Model

Apr 08, 2024

2023 Low-Power Computer Vision Challenge (LPCVC) Summary

Mar 11, 2024

DB-LLM: Accurate Dual-Binarization for Efficient LLMs

Feb 19, 2024

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Feb 08, 2024