Peter Y. K. Cheung
Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference

Jan 02, 2022
Erwei Wang, James J. Davis, Georgios-Ilias Stavrou, Peter Y. K. Cheung, George A. Constantinides, Mohamed S. Abdelfattah

Figures 1–4

Enabling Binary Neural Network Training on the Edge

Feb 10, 2021
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Claudionor Coelho, Satrajit Chatterjee, Peter Y. K. Cheung, George A. Constantinides

Figures 1–4

LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference

Oct 24, 2019
Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Figures 1–4

Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs

Oct 21, 2019
Yiren Zhao, Xitong Gao, Xuan Guo, Junyi Liu, Erwei Wang, Robert Mullins, Peter Y. K. Cheung, George Constantinides, Cheng-Zhong Xu

Figures 1–4

LUTNet: Rethinking Inference in FPGA Soft Logic

Apr 01, 2019
Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Figures 1–4

Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going

Jan 21, 2019
Erwei Wang, James J. Davis, Ruizhe Zhao, Ho-Cheung Ng, Xinyu Niu, Wayne Luk, Peter Y. K. Cheung, George A. Constantinides

Figures 1–4

Accuracy to Throughput Trade-offs for Reduced Precision Neural Networks on Reconfigurable Logic

Jul 17, 2018
Jiang Su, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Gianluca Durelli, David B. Thomas, Philip Leong, Peter Y. K. Cheung

Figures 1–4