
Fangmin Chen

TAP: A Token-Adaptive Predictor Framework for Training-Free Diffusion Acceleration

Mar 04, 2026

S2O: Early Stopping for Sparse Attention via Online Permutation

Feb 26, 2026

Train Short, Inference Long: Training-free Horizon Extension for Autoregressive Video Generation

Feb 17, 2026

NextFlow: Unified Sequential Modeling Activates Multimodal Understanding and Generation

Jan 05, 2026

GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference

Dec 23, 2024

ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models

Aug 16, 2024

FoldGPT: Simple and Effective Large Language Model Compression Scheme

Jul 01, 2024

SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity

Oct 30, 2023

Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution

Aug 05, 2023

Residual Local Feature Network for Efficient Super-Resolution

May 16, 2022