Zheng Zhan

Efficient Training with Denoised Neural Weights

Jul 16, 2024

DiffClass: Diffusion-Based Class Incremental Learning

Mar 08, 2024

E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Jan 11, 2024

DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning

Apr 30, 2023

All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

Dec 09, 2022

SparCL: Sparse Continual Learning on the Edge

Sep 20, 2022

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

Jul 25, 2022

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration

Nov 22, 2021

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

Oct 26, 2021

Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search

Aug 18, 2021