Julian Faraone

Super Efficient Neural Network for Compression Artifacts Reduction and Super Resolution

Jan 26, 2024

AddNet: Deep Neural Networks Using FPGA-Optimized Multipliers

Nov 19, 2019

SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks

Jul 01, 2018

Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

Oct 10, 2017