Francesco Conti


Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge

Jan 24, 2023
Matteo Risso, Alessio Burrello, Francesco Conti, Lorenzo Lamberti, Yukai Chen, Luca Benini, Enrico Macii, Massimo Poncino, Daniele Jahier Pagliari


RedMule: A Mixed-Precision Matrix-Matrix Operation Engine for Flexible and Energy-Efficient On-Chip Linear Algebra and TinyML Training Acceleration

Jan 10, 2023
Yvan Tortorella, Luca Bertaccini, Luca Benini, Davide Rossi, Francesco Conti


Pruning In Time (PIT): A Lightweight Network Architecture Optimizer for Temporal Convolutional Networks

Mar 28, 2022
Matteo Risso, Alessio Burrello, Daniele Jahier Pagliari, Francesco Conti, Lorenzo Lamberti, Enrico Macii, Luca Benini, Massimo Poncino


TCN Mapping Optimization for Ultra-Low Power Time-Series Edge Inference

Mar 24, 2022
Alessio Burrello, Alberto Dequino, Daniele Jahier Pagliari, Francesco Conti, Marcello Zanghieri, Enrico Macii, Luca Benini, Massimo Poncino


Vau da muntanialas: Energy-efficient multi-die scalable acceleration of RNN inference

Feb 14, 2022
Gianna Paulin, Francesco Conti, Lukas Cavigelli, Luca Benini


A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks

Jan 04, 2022
Angelo Garofalo, Gianmarco Ottavi, Francesco Conti, Geethan Karunaratne, Irem Boybat, Luca Benini, Davide Rossi


A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays

Oct 20, 2021
Leonardo Ravaglia, Manuele Rusci, Davide Nadalini, Alessandro Capotondi, Francesco Conti, Luca Benini


Vega: A 10-Core SoC for IoT End-Nodes with DNN Acceleration and Cognitive Wake-Up From MRAM-Based State-Retentive Sleep Mode

Oct 18, 2021
Davide Rossi, Francesco Conti, Manuel Eggimann, Alfio Di Mauro, Giuseppe Tagliavini, Stefan Mach, Marco Guermandi, Antonio Pullini, Igor Loi, Jie Chen, Eric Flamand, Luca Benini


Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs

Mar 19, 2021
Daniele Palossi, Nicky Zimmerman, Alessio Burrello, Francesco Conti, Hanna Müller, Luca Maria Gambardella, Luca Benini, Alessandro Giusti, Jérôme Guzzi


Multiscale Anisotropic Harmonic Filters on non Euclidean domains

Feb 01, 2021
Francesco Conti, Gaetano Scarano, Stefania Colonnese
