Abu Sebastian

IBM Research - Zurich

A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing

Feb 12, 2024
Elena Ferro, Athanasios Vasilopoulos, Corey Lammie, Manuel Le Gallo, Luca Benini, Irem Boybat, Abu Sebastian

Zero-shot Classification using Hyperdimensional Computing

Jan 30, 2024
Samuele Ruffino, Geethan Karunaratne, Michael Hersche, Luca Benini, Abu Sebastian, Abbas Rahimi

Probabilistic Abduction for Visual Abstract Reasoning via Learning Rules in Vector-symbolic Architectures

Jan 29, 2024
Michael Hersche, Francesco di Stefano, Thomas Hofmann, Abu Sebastian, Abbas Rahimi

TCNCA: Temporal Convolution Network with Chunked Attention for Scalable Sequence Processing

Dec 09, 2023
Aleksandar Terzic, Michael Hersche, Geethan Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi

MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition

Dec 05, 2023
Nicolas Menet, Michael Hersche, Geethan Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

Jul 18, 2023
Manuel Le Gallo, Corey Lammie, Julian Buechel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing

May 17, 2023
Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui

Factorizers for Distributed Sparse Block Codes

Mar 24, 2023
Michael Hersche, Aleksandar Terzic, Geethan Karunaratne, Jovin Langenegger, Angéline Pouget, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Feb 16, 2023
Malte J. Rasch, Charles Mackin, Manuel Le Gallo, An Chen, Andrea Fasoli, Frederic Odermatt, Ning Li, S. R. Nandakumar, Pritish Narayanan, Hsinyu Tsai, Geoffrey W. Burr, Abu Sebastian, Vijay Narayanan

In-memory factorization of holographic perceptual representations

Nov 09, 2022
Jovin Langenegger, Geethan Karunaratne, Michael Hersche, Luca Benini, Abu Sebastian, Abbas Rahimi
