
Michaela Blott

Inference of Quantized Neural Networks on Heterogeneous All-Programmable Devices

Jun 21, 2018

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017

Scaling Binarized Neural Networks on Reconfigurable Logic

Jan 27, 2017

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

Dec 01, 2016