Yanzhi Wang

F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization

Feb 10, 2022

Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets

Feb 09, 2022

AirNN: Neural Networks with Over-the-Air Convolution via Reconfigurable Intelligent Surfaces

Feb 07, 2022

VAQF: Fully Automatic Software-hardware Co-design Framework for Low-bit Vision Transformer

Jan 17, 2022

SPViT: Enabling Faster Vision Transformers via Soft Token Pruning

Dec 27, 2021

Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting

Dec 21, 2021

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration

Nov 22, 2021

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers

Nov 04, 2021

ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization Framework for FPGA

Oct 30, 2021

RMSMP: A Novel Deep Neural Network Quantization Framework with Row-wise Mixed Schemes and Multiple Precisions

Oct 30, 2021