Yanzhi Wang

AirNN: Neural Networks with Over-the-Air Convolution via Reconfigurable Intelligent Surfaces
Feb 07, 2022
Sara Garcia Sanchez, Guillem Reus Muns, Carlos Bocanegra, Yanyu Li, Ufuk Muncuk, Yousof Naderi, Yanzhi Wang, Stratis Ioannidis, Kaushik R. Chowdhury

VAQF: Fully Automatic Software-hardware Co-design Framework for Low-bit Vision Transformer
Jan 17, 2022
Mengshu Sun, Haoyu Ma, Guoliang Kang, Yifan Jiang, Tianlong Chen, Xiaolong Ma, Zhangyang Wang, Yanzhi Wang

SPViT: Enabling Faster Vision Transformers via Soft Token Pruning
Dec 27, 2021
Zhenglun Kong, Peiyan Dong, Xiaolong Ma, Xin Meng, Wei Niu, Mengshu Sun, Bin Ren, Minghai Qin, Hao Tang, Yanzhi Wang

Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting
Dec 21, 2021
Minghai Qin, Tianyun Zhang, Fei Sun, Yen-Kuang Chen, Makan Fardad, Yanzhi Wang, Yuan Xie

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration
Nov 22, 2021
Yifan Gong, Geng Yuan, Zheng Zhan, Wei Niu, Zhengang Li, Pu Zhao, Yuxuan Cai, Sijia Liu, Bin Ren, Xue Lin, Xulong Tang, Yanzhi Wang

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers
Nov 04, 2021
Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen

ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization Framework for FPGA
Oct 30, 2021
Sung-En Chang, Yanyu Li, Mengshu Sun, Yanzhi Wang, Xue Lin

RMSMP: A Novel Deep Neural Network Quantization Framework with Row-wise Mixed Schemes and Multiple Precisions
Oct 30, 2021
Sung-En Chang, Yanyu Li, Mengshu Sun, Weiwen Jiang, Sijia Liu, Yanzhi Wang, Xue Lin

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Oct 26, 2021
Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin
