Shang Wu


DRSI-Net: Dual-Residual Spatial Interaction Network for Multi-Person Pose Estimation

Feb 26, 2024
Shang Wu, Bin Wang


NetDistiller: Empowering Tiny Deep Learning via In-Situ Distillation

Oct 24, 2023
Shunyao Zhang, Yonggan Fu, Shang Wu, Jyotikrishna Dass, Haoran You, Yingyan Lin


NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations

Jun 10, 2023
Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan Lin


Instant-NeRF: Instant On-Device Neural Radiance Field Training via Algorithm-Accelerator Co-Designed Near-Memory Processing

May 09, 2023
Yang Zhao, Shang Wu, Jingqun Zhang, Sixu Li, Chaojian Li, Yingyan Lin


Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning

Apr 26, 2023
Zhongzhi Yu, Shang Wu, Yonggan Fu, Shunyao Zhang, Yingyan Lin


Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning

Apr 24, 2023
Yonggan Fu, Ye Yuan, Shang Wu, Jiayi Yuan, Yingyan Lin


ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention

Nov 09, 2022
Jyotikrishna Dass, Shang Wu, Huihong Shi, Chaojian Li, Zhifan Ye, Zhongfeng Wang, Yingyan Lin


Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?

Apr 05, 2022
Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin


LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference

Mar 15, 2022
Zhongzhi Yu, Yonggan Fu, Shang Wu, Mengquan Li, Haoran You, Yingyan Lin


Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks

Nov 06, 2021
Yonggan Fu, Qixuan Yu, Yang Zhang, Shang Wu, Xu Ouyang, David Cox, Yingyan Lin
