Yongpan Liu

A 65nm 8b-Activation 8b-Weight SRAM-Based Charge-Domain Computing-in-Memory Macro Using A Fully-Parallel Analog Adder Network and A Single-ADC Interface

Nov 23, 2022
Guodong Yin, Mufeng Zhou, Yiming Chen, Wenjun Tang, Zekun Yang, Mingyen Lee, Xirui Du, Jinshan Yue, Jiaxin Liu, Huazhong Yang, Yongpan Liu, Xueqing Li

Block-Wise Dynamic-Precision Neural Network Training Acceleration via Online Quantization Sensitivity Analytics

Oct 31, 2022
Ruoyang Liu, Chenhan Wei, Yixiong Yang, Wenxun Wang, Huazhong Yang, Yongpan Liu

SEFormer: Structure Embedding Transformer for 3D Object Detection

Sep 05, 2022
Xiaoyu Feng, Heming Du, Yueqi Duan, Yongpan Liu, Hehe Fan

Adaptive Structured Sparse Network for Efficient CNNs with Feature Regularization

Oct 21, 2020
Chen Tang, Wenyu Sun, Zhuqing Yuan, Guijin Wang, Yongpan Liu

ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression

Jun 07, 2020
Xiaoyu Feng, Zhuqing Yuan, Guijin Wang, Yongpan Liu

Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM

Mar 30, 2019
Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang
