Michaela Blott
FAT: Training Neural Networks for Reliable Inference Under Hardware Faults

Nov 11, 2020
Ussama Zahid, Giulio Gambardella, Nicholas J. Fraser, Michaela Blott, Kees Vissers

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications

Apr 06, 2020
Yaman Umuroglu, Yash Akhauri, Nicholas J. Fraser, Michaela Blott

Evolutionary Bin Packing for Memory-Efficient Dataflow Inference Acceleration on FPGA

Mar 24, 2020
Mairin Kroes, Lucian Petrica, Sorin Cotofana, Michaela Blott

Efficient Error-Tolerant Quantized Neural Network Accelerators

Dec 16, 2019
Giulio Gambardella, Johannes Kappauf, Michaela Blott, Christoph Doehring, Martin Kumm, Peter Zipf, Kees Vissers

Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs

Nov 21, 2018
Yifan Yang, Qijing Huang, Bichen Wu, Tianjun Zhang, Liang Ma, Giulio Gambardella, Michaela Blott, Luciano Lavagno, Kees Vissers, John Wawrzynek, Kurt Keutzer

Accuracy to Throughput Trade-offs for Reduced Precision Neural Networks on Reconfigurable Logic

Jul 17, 2018
Jiang Su, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Gianluca Durelli, David B. Thomas, Philip Leong, Peter Y. K. Cheung

FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs

Jul 11, 2018
Vladimir Rybalkin, Alessandro Pappalardo, Muhammad Mohsin Ghaffar, Giulio Gambardella, Norbert Wehn, Michaela Blott

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

Jul 01, 2018
Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H. W. Leong

Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic

Jun 26, 2018
Michaela Blott, Thomas B. Preusser, Nicholas Fraser, Giulio Gambardella, Kenneth O'Brien, Yaman Umuroglu, Miriam Leeser

Inference of Quantized Neural Networks on Heterogeneous All-Programmable Devices

Jun 21, 2018
Thomas B. Preußer, Giulio Gambardella, Nicholas Fraser, Michaela Blott
