
Abu Sebastian

IBM Research - Zurich

Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics

Sep 21, 2022
Dominique J. Kösters, Bryan A. Kortman, Irem Boybat, Elena Ferro, Sagar Dolas, Roberto de Austri, Johan Kwisthout, Hans Hilgenkamp, Theo Rasing, Heike Riel, Abu Sebastian, Sascha Caron, Johan H. Mentink

In-memory Realization of In-situ Few-shot Continual Learning with a Dynamically Evolving Explicit Memory

Jul 14, 2022
Geethan Karunaratne, Michael Hersche, Jovin Langenegger, Giovanni Cherubini, Manuel Le Gallo-Bourdeau, Urs Egger, Kevin Brew, Sam Choi, Injo Ok, Mary Claire Silvestre, Ning Li, Nicole Saulnier, Victor Chan, Ishtiaq Ahsan, Vijay Narayanan, Luca Benini, Abu Sebastian, Abbas Rahimi

Constrained Few-shot Class-incremental Learning

Mar 30, 2022
Michael Hersche, Geethan Karunaratne, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Generalized Key-Value Memory to Flexibly Adjust Redundancy in Memory-Augmented Networks

Mar 11, 2022
Denis Kleyko, Geethan Karunaratne, Jan M. Rabaey, Abu Sebastian, Abbas Rahimi

A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices

Mar 09, 2022
Michael Hersche, Mustafa Zeqiri, Luca Benini, Abu Sebastian, Abbas Rahimi

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

Nov 10, 2021
Chuteng Zhou, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough

A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

Apr 05, 2021
Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan

Robust High-dimensional Memory-augmented Neural Networks

Oct 05, 2020
Geethan Karunaratne, Manuel Schmuck, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Short-term synaptic plasticity optimally models continuous environments

Sep 15, 2020
Timoleon Moraitis, Abu Sebastian, Evangelos Eleftheriou

Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing

Apr 30, 2020
Adnan Mehonic, Abu Sebastian, Bipin Rajendran, Osvaldo Simeone, Eleni Vasilaki, Anthony J. Kenyon
