Yuzhang Shang

Distilling Long-tailed Datasets
Aug 24, 2024

Dataset Quantization with Active Learning based Adaptive Sampling
Jul 09, 2024

A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training
May 27, 2024

PTQ4DiT: Post-training Quantization for Diffusion Transformers
May 25, 2024

Efficient Multitask Dense Predictor via Binarization
May 23, 2024

LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models
Apr 01, 2024

FBPT: A Fully Binary Point Transformer
Mar 15, 2024

LLM Inference Unveiled: Survey and Roofline Model Insights
Mar 11, 2024

Online Multi-spectral Neuron Tracing
Mar 10, 2024

QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning
Feb 13, 2024