Zheng Zhang

Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing

May 17, 2021
Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu


Breaking Shortcut: Exploring Fully Convolutional Cycle-Consistency for Video Correspondence Learning

May 12, 2021
Yansong Tang, Zhenyu Jiang, Zhenda Xie, Yue Cao, Zheng Zhang, Philip H. S. Torr, Han Hu


Self-Supervised Learning with Swin Transformers

May 11, 2021
Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, Han Hu


3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration

May 11, 2021
Yao Chen, Cole Hawkins, Kaiqi Zhang, Zheng Zhang, Cong Hao


Group-Free 3D Object Detection via Transformers

Apr 23, 2021
Ze Liu, Zheng Zhang, Yue Cao, Han Hu, Xin Tong


Meta-tuning Language Models to Answer Prompts Better

Apr 17, 2021
Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein


High-Dimensional Uncertainty Quantification via Rank- and Sample-Adaptive Tensor Regression

Mar 31, 2021
Zichang He, Zheng Zhang


Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

Mar 25, 2021
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo
