Emre Neftci

QS4D: Quantization-aware training for efficient hardware deployment of structured state-space sequential models

Jul 08, 2025

Structured State Space Model Dynamics and Parametrization for Spiking Neural Networks

Jun 04, 2025

Contrastive Consolidation of Top-Down Modulations Achieves Sparsely Supervised Continual Learning

May 20, 2025

A Grid Cell-Inspired Structured Vector Algebra for Cognitive Maps

Mar 11, 2025

A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity

Jan 20, 2025

Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory

Add code
Dec 16, 2024
Figure 1 for Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory
Figure 2 for Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory
Figure 3 for Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory
Figure 4 for Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory
Viaarxiv icon

Zero-Shot Temporal Resolution Domain Adaptation for Spiking Neural Networks

Nov 07, 2024

On-Chip Learning via Transformer In-Context Learning

Oct 11, 2024

Analog In-Memory Computing Attention Mechanism for Fast and Energy-Efficient Large Language Models

Sep 28, 2024

SNNAX -- Spiking Neural Networks in JAX

Sep 04, 2024