Jianbin Fang

Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A Machine Learning Based Approach

Mar 05, 2020
Peng Zhang, Jianbin Fang, Canqun Yang, Chun Huang, Tao Tang, Zheng Wang

Characterizing Scalability of Sparse Matrix-Vector Multiplications on Phytium FT-2000+ Many-cores

Nov 20, 2019
Donglin Chen, Jianbin Fang, Chuanfu Xu, Shizhao Chen, Zheng Wang

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

Oct 21, 2018
Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang
