Erwei Wang

Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference
Jan 02, 2022
Erwei Wang, James J. Davis, Georgios-Ilias Stavrou, Peter Y. K. Cheung, George A. Constantinides, Mohamed S. Abdelfattah

Accelerating Recurrent Neural Networks for Gravitational Wave Experiments
Jun 26, 2021
Zhiqiang Que, Erwei Wang, Umar Marikar, Eric Moreno, Jennifer Ngadiuba, Hamza Javed, Bartłomiej Borzyszkowski, Thea Aarrestad, Vladimir Loncar, Sioni Summers, Maurizio Pierini, Peter Y. Cheung, Wayne Luk

Enabling Binary Neural Network Training on the Edge
Feb 10, 2021
Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Claudionor Coelho, Satrajit Chatterjee, Peter Y. K. Cheung, George A. Constantinides

LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference
Oct 24, 2019
Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs
Oct 21, 2019
Yiren Zhao, Xitong Gao, Xuan Guo, Junyi Liu, Erwei Wang, Robert Mullins, Peter Y. K. Cheung, George Constantinides, Cheng-Zhong Xu

LUTNet: Rethinking Inference in FPGA Soft Logic
Apr 01, 2019
Erwei Wang, James J. Davis, Peter Y. K. Cheung, George A. Constantinides

Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going
Jan 21, 2019
Erwei Wang, James J. Davis, Ruizhe Zhao, Ho-Cheung Ng, Xinyu Niu, Wayne Luk, Peter Y. K. Cheung, George A. Constantinides
