Alessandro Pappalardo

A2Q+: Improving Accumulator-Aware Weight Quantization
Jan 19, 2024
Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig, Yaman Umuroglu

Post-Training Quantization with Low-precision Minifloats and Integers on FPGAs
Nov 21, 2023
Shivam Aggarwal, Alessandro Pappalardo, Hans Jakob Damsgaard, Giuseppe Franco, Thomas B. Preußer, Michaela Blott, Tulika Mitra

A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance
Aug 25, 2023
Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig

Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance
Jan 31, 2023
Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig

QONNX: Representing Arbitrary-Precision Quantized Neural Networks
Jun 17, 2022
Alessandro Pappalardo, Yaman Umuroglu, Michaela Blott, Jovan Mitrevski, Ben Hawks, Nhan Tran, Vladimir Loncar, Sioni Summers, Hendrik Borras, Jules Muhizi, Matthew Trahms, Shih-Chieh Hsu, Scott Hauck, Javier Duarte

Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference
Feb 22, 2021
Benjamin Hawks, Javier Duarte, Nicholas J. Fraser, Alessandro Pappalardo, Nhan Tran, Yaman Umuroglu

FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs
Jul 11, 2018
Vladimir Rybalkin, Alessandro Pappalardo, Muhammad Mohsin Ghaffar, Giulio Gambardella, Norbert Wehn, Michaela Blott
