Fangmin Chen

FoldGPT: Simple and Effective Large Language Model Compression Scheme

Jul 01, 2024
SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity

Oct 30, 2023
Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution

Aug 05, 2023
Residual Local Feature Network for Efficient Super-Resolution

May 16, 2022