Zheng Zhan

E²GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Jan 11, 2024
Yifan Gong, Zheng Zhan, Qing Jin, Yanyu Li, Yerlan Idelbayev, Xian Liu, Andrey Zharkov, Kfir Aberman, Sergey Tulyakov, Yanzhi Wang, Jian Ren

DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning

Apr 30, 2023
Zifeng Wang, Zheng Zhan, Yifan Gong, Yucai Shao, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy

All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

Dec 09, 2022
Yifan Gong, Zheng Zhan, Pu Zhao, Yushu Wu, Chao Wu, Caiwen Ding, Weiwen Jiang, Minghai Qin, Yanzhi Wang

SparCL: Sparse Continual Learning on the Edge

Sep 20, 2022
Zifeng Wang, Zheng Zhan, Yifan Gong, Geng Yuan, Wei Niu, Tong Jian, Bin Ren, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

Jul 25, 2022
Yushu Wu, Yifan Gong, Pu Zhao, Yanyu Li, Zheng Zhan, Wei Niu, Hao Tang, Minghai Qin, Bin Ren, Yanzhi Wang

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration

Nov 22, 2021
Yifan Gong, Geng Yuan, Zheng Zhan, Wei Niu, Zhengang Li, Pu Zhao, Yuxuan Cai, Sijia Liu, Bin Ren, Xue Lin, Xulong Tang, Yanzhi Wang

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge

Oct 26, 2021
Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin

Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search

Aug 18, 2021
Zheng Zhan, Yifan Gong, Pu Zhao, Geng Yuan, Wei Niu, Yushu Wu, Tianyun Zhang, Malith Jayaweera, David Kaeli, Bin Ren, Xue Lin, Yanzhi Wang

6.7ms on Mobile with over 78% ImageNet Accuracy: Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration

Dec 01, 2020
Zhengang Li, Geng Yuan, Wei Niu, Yanyu Li, Pu Zhao, Yuxuan Cai, Xuan Shen, Zheng Zhan, Zhenglun Kong, Qing Jin, Zhiyu Chen, Sijia Liu, Kaiyuan Yang, Bin Ren, Yanzhi Wang, Xue Lin

Towards Real-Time DNN Inference on Mobile Platforms with Model Pruning and Compiler Optimization

Apr 22, 2020
Wei Niu, Pu Zhao, Zheng Zhan, Xue Lin, Yanzhi Wang, Bin Ren
