Mengzhao Chen

Adapting LLaMA Decoder to Vision Transformer

Apr 13, 2024
Jiahao Wang, Wenqi Shao, Mengzhao Chen, Chengyue Wu, Yong Liu, Kaipeng Zhang, Songyang Zhang, Kai Chen, Ping Luo

BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation

Feb 18, 2024
Peng Xu, Wenqi Shao, Mengzhao Chen, Shitao Tang, Kaipeng Zhang, Peng Gao, Fengwei An, Yu Qiao, Ping Luo

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization

Nov 16, 2023
Yunshan Zhong, Jiawei Hu, Mingbao Lin, Mengzhao Chen, Rongrong Ji

OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models

Aug 25, 2023
Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, Ping Luo

Spatial Re-parameterization for N:M Sparsity

Jun 09, 2023
Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Mengzhao Chen, Fei Chao, Rongrong Ji

DiffRate: Differentiable Compression Rate for Efficient Vision Transformers

May 29, 2023
Mengzhao Chen, Wenqi Shao, Peng Xu, Mingbao Lin, Kaipeng Zhang, Fei Chao, Rongrong Ji, Yu Qiao, Ping Luo

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization

May 14, 2023
Yunshan Zhong, Mingbao Lin, Yuyao Zhou, Mengzhao Chen, Yuxin Zhang, Fei Chao, Rongrong Ji

SMMix: Self-Motivated Image Mixing for Vision Transformers

Dec 26, 2022
Mengzhao Chen, Mingbao Lin, ZhiHang Lin, Yuxin Zhang, Fei Chao, Rongrong Ji

Super Vision Transformer

May 26, 2022
Mingbao Lin, Mengzhao Chen, Yuxin Zhang, Ke Li, Yunhang Shen, Chunhua Shen, Rongrong Ji