Zheng-Ning Liu

Visual Attention Network
Mar 08, 2022
Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu

Attention Mechanisms in Computer Vision: A Survey
Nov 15, 2021
Meng-Hao Guo, Tian-Xing Xu, Jiang-Jiang Liu, Zheng-Ning Liu, Peng-Tao Jiang, Tai-Jiang Mu, Song-Hai Zhang, Ralph R. Martin, Ming-Ming Cheng, Shi-Min Hu

Subdivision-Based Mesh Convolution Networks
Jun 04, 2021
Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang, Tai-Jiang Mu, Ralph R. Martin

Can Attention Enable MLPs To Catch Up With CNNs?
May 31, 2021
Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, Dun Liang, Ralph R. Martin, Shi-Min Hu

Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks
May 31, 2021
Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, Shi-Min Hu

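The title above already states the core idea: replace query-key self-attention with two small linear layers acting as shared, learnable external memories. Below is a minimal PyTorch sketch of that idea; it is an illustrative re-implementation based on the paper title, not the authors' released code, and the memory size of 64 and the exact double-normalization order are assumptions.

```python
# Minimal sketch of external attention: two linear layers serve as external
# key/value memories shared across all inputs, replacing self-attention.
import torch
import torch.nn as nn


class ExternalAttention(nn.Module):
    def __init__(self, dim: int, memory_size: int = 64):  # memory_size is an assumed default
        super().__init__()
        self.mk = nn.Linear(dim, memory_size, bias=False)  # external "key" memory
        self.mv = nn.Linear(memory_size, dim, bias=False)  # external "value" memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), e.g. flattened image patches or points
        attn = self.mk(x)                                      # (B, N, M) attention map
        attn = attn.softmax(dim=1)                             # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)   # second (L1) normalization
        return self.mv(attn)                                   # (B, N, dim) output features


# Usage: tokens = torch.randn(2, 196, 256); out = ExternalAttention(256)(tokens)
```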

PCT: Point Cloud Transformer
Dec 17, 2020
Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu