Yanzhi Wang

Effective Model Sparsification by Scheduled Grow-and-Prune Methods

Jun 18, 2021

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator

Jun 16, 2021

Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression

Jun 16, 2021

Towards Fast and Accurate Multi-Person Pose Estimation on Mobile Devices

Jun 06, 2021

A Compression-Compilation Framework for On-mobile Real-time BERT Applications

Jun 06, 2021

Teachers Do More Than Teach: Compressing Image-to-Image Models

Mar 05, 2021

Lottery Ticket Implies Accuracy Degradation, Is It a Desirable Phenomenon?

Feb 19, 2021

Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device

Dec 26, 2020

Learn-Prune-Share for Lifelong Learning

Dec 13, 2020

Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework

Dec 12, 2020