Gianna Paulin

ITA: An Energy-Efficient Attention and Softmax Accelerator for Quantized Transformers

Jul 10, 2023
Gamze İslamoğlu, Moritz Scherer, Gianna Paulin, Tim Fischer, Victor J. B. Jung, Angelo Garofalo, Luca Benini

Marsellus: A Heterogeneous RISC-V AI-IoT End-Node SoC with 2-to-8b DNN Acceleration and 30%-Boost Adaptive Body Biasing

May 15, 2023
Francesco Conti, Gianna Paulin, Davide Rossi, Alfio Di Mauro, Georg Rutishauser, Gianmarco Ottavi, Manuel Eggimann, Hayate Okuhara, Luca Benini

Vau da muntanialas: Energy-efficient multi-die scalable acceleration of RNN inference

Feb 14, 2022
Gianna Paulin, Francesco Conti, Lukas Cavigelli, Luca Benini

Chipmunk: A Systolically Scalable 0.9 mm², 3.08 Gop/s/mW @ 1.2 mW Accelerator for Near-Sensor Recurrent Neural Network Inference

Feb 20, 2018
Francesco Conti, Lukas Cavigelli, Gianna Paulin, Igor Susmelj, Luca Benini
