Zhengang Li

BLK-REW: A Unified Block-based DNN Pruning Framework using Reweighted Regularization Method

Feb 22, 2020

RTMobile: Beyond Real-Time Mobile Acceleration of RNNs for Speech Recognition

Feb 19, 2020

SS-Auto: A Single-Shot, Automatic Structured Weight Pruning Framework of DNNs with Ultra-High Efficiency

Jan 23, 2020

A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation

Nov 24, 2019

Non-structured DNN Weight Pruning Considered Harmful

Jul 03, 2019

ResNet Can Be Pruned 60x: Introducing Network Purification and Unused Path Removal after Weight Pruning

Apr 30, 2019

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM

Mar 30, 2019