Philip H. W. Leong

The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting

Mar 28, 2023
Teng-Hui Huang, Thilini Dahanayaka, Kanchana Thilakarathna, Philip H. W. Leong, Hesham El Gamal

NITI: Training Integer Neural Networks Using Integer-only Arithmetic

Sep 28, 2020
Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden K. H. So

MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency

Feb 27, 2020
Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong

AddNet: Deep Neural Networks Using FPGA-Optimized Multipliers

Nov 19, 2019
Julian Faraone, Martin Kumm, Martin Hardieck, Peter Zipf, Xueyuan Liu, David Boland, Philip H. W. Leong

Unrolling Ternary Neural Networks

Sep 09, 2019
Stephen Tridgell, Martin Kumm, Martin Hardieck, David Boland, Duncan Moss, Peter Zipf, Philip H. W. Leong

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

Jul 01, 2018
Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H. W. Leong

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017
Julian Faraone, Nicholas Fraser, Giulio Gambardella, Michaela Blott, Philip H. W. Leong
