Giulio Gambardella

FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

Nov 11, 2020
Ussama Zahid, Giulio Gambardella, Nicholas J. Fraser, Michaela Blott, Kees Vissers

Efficient Error-Tolerant Quantized Neural Network Accelerators

Dec 16, 2019
Giulio Gambardella, Johannes Kappauf, Michaela Blott, Christoph Doehring, Martin Kumm, Peter Zipf, Kees Vissers

Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs

Nov 21, 2018
Yifan Yang, Qijing Huang, Bichen Wu, Tianjun Zhang, Liang Ma, Giulio Gambardella, Michaela Blott, Luciano Lavagno, Kees Vissers, John Wawrzynek, Kurt Keutzer

Accuracy to Throughput Trade-offs for Reduced Precision Neural Networks on Reconfigurable Logic

Jul 17, 2018
Jiang Su, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Gianluca Durelli, David B. Thomas, Philip Leong, Peter Y. K. Cheung

FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs

Jul 11, 2018
Vladimir Rybalkin, Alessandro Pappalardo, Muhammad Mohsin Ghaffar, Giulio Gambardella, Norbert Wehn, Michaela Blott

Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic

Jun 26, 2018
Michaela Blott, Thomas B. Preusser, Nicholas Fraser, Giulio Gambardella, Kenneth O'Brien, Yaman Umuroglu, Miriam Leeser

Inference of Quantized Neural Networks on Heterogeneous All-Programmable Devices

Jun 21, 2018
Thomas B. Preußer, Giulio Gambardella, Nicholas Fraser, Michaela Blott

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017
Julian Faraone, Nicholas Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong

Scaling Binarized Neural Networks on Reconfigurable Logic

Jan 27, 2017
Nicholas J. Fraser, Yaman Umuroglu, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

Dec 01, 2016
Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, Kees Vissers
