
Lukas Cavigelli

Q-EEGNet: An Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain–Machine Interfaces

Apr 24, 2020

RPR: Random Partition Relaxation for Training Binary and Ternary Weight Neural Networks

Jan 04, 2020

HR-SAR-Net: A Deep Neural Network for Urban Scene Segmentation from High-Resolution SAR Data

Dec 10, 2019

FANN-on-MCU: An Open-Source Toolkit for Energy-Efficient Neural Network Inference at the Edge of the Internet of Things

Nov 08, 2019

EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators

Aug 30, 2019

Additive Noise Annealing and Approximation Properties of Quantized Neural Networks

May 24, 2019

Extended Bit-Plane Compression for Convolutional Neural Network Accelerators

Oct 01, 2018

CBinfer: Exploiting Frame-to-Frame Locality for Faster Convolutional Network Inference on Video Streams

Aug 15, 2018

Hyperdrive: A Systolically Scalable Binary-Weight CNN Inference Engine for mW IoT End-Nodes

Jun 13, 2018

XNORBIN: A 95 TOp/s/W Hardware Accelerator for Binary Convolutional Neural Networks

Mar 05, 2018