
Tianyun Zhang

Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks

Dec 20, 2021

Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search

Aug 18, 2021

Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning

Oct 08, 2020

Computation on Sparse Neural Networks: an Inspiration for Future Hardware

Apr 24, 2020

A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods

Apr 12, 2020

BLK-REW: A Unified Block-based DNN Pruning Framework using Reweighted Regularization Method

Feb 22, 2020

An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices

Feb 22, 2020

An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM

Aug 29, 2019

Beyond Adversarial Training: Min-Max Optimization in Adversarial Attack and Defense

Jun 09, 2019

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM

Mar 30, 2019