
Lingchuan Meng

Restructurable Activation Networks

Aug 17, 2022

Armour: Generalizable Compact Self-Attention for Vision Transformers

Aug 03, 2021

Collapsible Linear Blocks for Super-Efficient Super Resolution

Mar 17, 2021

Efficient Winograd Convolution via Integer Arithmetic

Jan 07, 2019