
Maurizio Denna

Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 challenge: Report

Nov 07, 2022

Real-Time Quantized Image Super-Resolution on Mobile NPUs, Mobile AI 2021 Challenge: Report

May 17, 2021

Automated Design Space Exploration for optimised Deployment of DNN on Arm Cortex-A CPUs

Jun 09, 2020

QUENN: QUantization Engine for low-power Neural Networks

Nov 14, 2018