Nicholas J. Fraser


Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference

Feb 22, 2021
Benjamin Hawks, Javier Duarte, Nicholas J. Fraser, Alessandro Pappalardo, Nhan Tran, Yaman Umuroglu

FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

Nov 11, 2020
Ussama Zahid, Giulio Gambardella, Nicholas J. Fraser, Michaela Blott, Kees Vissers

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications

Apr 06, 2020
Yaman Umuroglu, Yash Akhauri, Nicholas J. Fraser, Michaela Blott

Accuracy to Throughput Trade-offs for Reduced Precision Neural Networks on Reconfigurable Logic

Jul 17, 2018
Jiang Su, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Gianluca Durelli, David B. Thomas, Philip Leong, Peter Y. K. Cheung

Scaling Binarized Neural Networks on Reconfigurable Logic

Jan 27, 2017
Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

Dec 01, 2016
Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers
