Alessandro Capotondi

Heterogeneous Encoders Scaling In The Transformer For Neural Machine Translation

Dec 26, 2023
Jia Cheng Hu, Roberto Cavicchioli, Giulia Berardinelli, Alessandro Capotondi

A request for clarity over the End of Sequence token in the Self-Critical Sequence Training

May 20, 2023
Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi

ExpansionNet v2: Block Static Expansion in fast end to end training for Image Captioning

Aug 19, 2022
Jia Cheng Hu, Roberto Cavicchioli, Alessandro Capotondi

A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays

Oct 20, 2021
Leonardo Ravaglia, Manuele Rusci, Davide Nadalini, Alessandro Capotondi, Francesco Conti, Luca Benini

Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers

Aug 12, 2020
Manuele Rusci, Marco Fariselli, Alessandro Capotondi, Luca Benini

Robust navigation with tinyML for autonomous mini-vehicles

Jul 01, 2020
Miguel de Prado, Romain Donze, Alessandro Capotondi, Manuele Rusci, Serge Monnerat, Luca Benini, Nuria Pazos

Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers

May 30, 2019
Manuele Rusci, Alessandro Capotondi, Luca Benini

NEURAghe: Exploiting CPU-FPGA Synergies for Efficient and Flexible CNN Inference Acceleration on Zynq SoCs

Dec 04, 2017
Paolo Meloni, Alessandro Capotondi, Gianfranco Deriu, Michele Brian, Francesco Conti, Davide Rossi, Luigi Raffo, Luca Benini
