Sheng Lin

An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices

Jan 20, 2020
Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Fu-Ming Guo, Sheng Lin, Hongjia Li, Xiang Chen, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang


PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning

Jan 17, 2020
Wei Niu, Xiaolong Ma, Sheng Lin, Shihao Wang, Xuehai Qian, Xue Lin, Yanzhi Wang, Bin Ren


A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation

Nov 24, 2019
Geng Yuan, Xiaolong Ma, Sheng Lin, Zhengang Li, Caiwen Ding


DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks

Nov 20, 2019
Ao Ren, Tao Zhang, Yuhao Wang, Sheng Lin, Peiyan Dong, Yen-kuang Chen, Yuan Xie, Yanzhi Wang


Deep Compressed Pneumonia Detection for Low-Power Embedded Devices

Nov 04, 2019
Hongjia Li, Sheng Lin, Ning Liu, Caiwen Ding, Yanzhi Wang


Learning Dynamic Context Augmentation for Global Entity Linking

Sep 04, 2019
Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, Xiang Ren


An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM

Aug 29, 2019
Geng Yuan, Xiaolong Ma, Caiwen Ding, Sheng Lin, Tianyun Zhang, Zeinab S. Jalali, Yilong Zhao, Li Jiang, Sucheta Soundarajan, Yanzhi Wang
