Nicholas Fraser

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

Jul 01, 2018
Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H. W. Leong

Scaling Neural Network Performance through Customized Hardware Architectures on Reconfigurable Logic

Jun 26, 2018
Michaela Blott, Thomas B. Preußer, Nicholas Fraser, Giulio Gambardella, Kenneth O'Brien, Yaman Umuroglu, Miriam Leeser

Inference of Quantized Neural Networks on Heterogeneous All-Programmable Devices

Jun 21, 2018
Thomas B. Preußer, Giulio Gambardella, Nicholas Fraser, Michaela Blott

Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines

May 21, 2018
Sean O. Settle, Manasa Bollavaram, Paolo D'Alberto, Elliott Delaye, Oscar Fernandez, Nicholas Fraser, Aaron Ng, Ashish Sirasao, Michael Wu

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017
Julian Faraone, Nicholas Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong
