Zheng Zhan

DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning

Apr 30, 2023

All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

Dec 09, 2022

SparCL: Sparse Continual Learning on the Edge

Sep 20, 2022

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

Jul 25, 2022

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration

Nov 22, 2021

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

Oct 26, 2021

Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search

Aug 18, 2021

6.7ms on Mobile with over 78% ImageNet Accuracy: Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration

Dec 01, 2020

Towards Real-Time DNN Inference on Mobile Platforms with Model Pruning and Compiler Optimization

Apr 22, 2020

A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods

Apr 12, 2020