Huihong Shi

An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT

Mar 29, 2024
Haikuo Shao, Huihong Shi, Wendong Mao, Zhongfeng Wang

A Computationally Efficient Neural Video Compression Accelerator Based on a Sparse CNN-Transformer Hybrid Network

Dec 19, 2023
Siyu Zhang, Wendong Mao, Huihong Shi, Zhongfeng Wang

S2R: Exploring a Double-Win Transformer-Based Framework for Ideal and Blind Super-Resolution

Aug 16, 2023
Minghao She, Wendong Mao, Huihong Shi, Zhongfeng Wang

ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer

Jun 10, 2023
Haoran You, Huihong Shi, Yipin Guo, Yingyan Lin

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention

Nov 09, 2022
Jyotikrishna Dass, Shang Wu, Huihong Shi, Chaojian Li, Zhifan Ye, Zhongfeng Wang, Yingyan Lin

NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks

Oct 24, 2022
Huihong Shi, Haoran You, Yang Zhao, Zhongfeng Wang, Yingyan Lin

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Oct 18, 2022
Haoran You, Zhanyi Sun, Huihong Shi, Zhongzhi Yu, Yang Zhao, Yongan Zhang, Chaojian Li, Baopu Li, Yingyan Lin

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks

May 17, 2022
Haoran You, Baopu Li, Huihong Shi, Yonggan Fu, Yingyan Lin
