Centre for Nanotechnology, Indian Institute of Technology Roorkee
Abstract: Attention is the brain's ability to selectively focus on a few specific aspects while ignoring irrelevant ones. This biological principle inspired the attention mechanism in modern Transformers. Transformers now underpin large language models (LLMs) such as GPT, but at the cost of massive training and inference energy, leading to a large carbon footprint. While brain attention emerges from neural circuits, Transformer attention relies on dot-product similarity to weight elements of the input sequence. Neuromorphic computing, especially spiking neural networks (SNNs), offers a brain-inspired path to energy-efficient intelligence. Despite recent work on attention-based spiking Transformers, the core attention layer remains non-neuromorphic. Current spiking attention (i) relies on dot-product or element-wise similarity suited to floating-point operations, not event-driven spikes; (ii) keeps attention matrices that suffer from the von Neumann bottleneck, limiting in-memory computing; and (iii) still diverges from brain-like computation. To address these issues, we propose the Spiking STDP Transformer (S$^{2}$TDPT), a neuromorphic Transformer that implements self-attention through spike-timing-dependent plasticity (STDP), embedding query--key correlations in synaptic weights. STDP, a core mechanism of memory and learning in the brain and widely studied in neuromorphic devices, naturally enables in-memory computing and supports non-von Neumann hardware. On CIFAR-10 and CIFAR-100, our model achieves 94.35\% and 78.08\% accuracy, respectively, with only four timesteps, consuming 0.49 mJ on CIFAR-100, an 88.47\% energy reduction compared to a standard ANN Transformer. Grad-CAM shows that the model attends to semantically relevant regions, enhancing interpretability. Overall, S$^{2}$TDPT illustrates how biologically inspired attention can yield energy-efficient, hardware-friendly, and explainable neuromorphic models.
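The key architectural idea is that query--key correlation is accumulated into synaptic weights by an STDP-style update over spike timesteps, rather than computed as a floating-point dot-product attention map. A minimal sketch of that idea is given below; the spike shapes, the exponential key trace, and the potentiation rule are illustrative assumptions, not the authors' exact S$^{2}$TDPT layer.

# Hedged sketch: STDP-style attention scores between spiking query/key trains.
# Shapes, the trace decay, and the update rule are assumptions for illustration.
import torch

def stdp_attention_scores(q_spikes, k_spikes, decay=0.5):
    """q_spikes, k_spikes: binary spike tensors of shape [T, N, D]
    (timesteps, tokens, feature dim). Returns an [N, N] attention map
    accumulated as STDP-like weight updates instead of a dot product."""
    T, N, D = q_spikes.shape
    trace = torch.zeros(N, D)      # decaying eligibility trace left by key spikes
    weights = torch.zeros(N, N)    # attention map stored as "synaptic" weights
    for t in range(T):
        trace = decay * trace + k_spikes[t]
        # potentiation: query spikes arriving after key spikes strengthen the synapse
        weights += q_spikes[t] @ trace.t()
    return weights / T

# toy usage: 4 timesteps, 8 tokens, 16 features
q = (torch.rand(4, 8, 16) < 0.2).float()
k = (torch.rand(4, 8, 16) < 0.2).float()
attn = stdp_attention_scores(q, k)   # [8, 8] query-key correlation map

Because the correlations accumulate in place as weights, such a score could in principle be realized by in-memory (non-von Neumann) synaptic hardware, which is the motivation the abstract emphasizes.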




Abstract: Speech recognition is a key challenge in natural language processing, requiring low latency, efficient computation, and strong generalization for real-time applications. While software-based artificial neural networks (ANNs) excel at this task, they are computationally intensive and depend heavily on data pre-processing. Neuromorphic computing, with its low-latency and energy-efficient advantages, holds promise for audio classification. Memristive nanowire networks (NWNs), combined with pre-processing techniques such as Mel-Frequency Cepstrum Coefficient extraction, have been widely used for associative learning, but such pre-processing can be power-intensive, undermining the latency benefits. This study pioneers the use of the memristive and spatio-temporal properties of nanowire networks for audio signal classification without pre-processing. A nanowire network simulation is paired with three linear classifiers for 10-class MNIST audio classification and binary speaker-generalization tests. The hybrid system achieves significant benefits: excellent data compression, with only 3% of the nanowire outputs utilized; a 10-fold reduction in computational latency; and up to 28.5% improved classification accuracy (using a logistic regression classifier). Precision and recall improve by 10% and 17% on multispeaker datasets, and by 24% and 17% on individual-speaker datasets, compared to classifiers trained on raw data. This work provides a foundational proof of concept for utilizing memristive nanowire networks in edge-computing devices, showcasing their potential for efficient, real-time audio signal processing with reduced computational overhead and power consumption, and enabling the development of advanced neuromorphic computing solutions.
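The pipeline the abstract describes is: drive raw audio through the nanowire-network simulation, keep a small fraction (about 3%) of the nanowire outputs, and train a linear classifier on those features. A minimal sketch of that readout-plus-classifier stage follows; the nanowire simulator is stubbed out with a random projection and saturating nonlinearity, so the names, shapes, and feature selection here are illustrative assumptions rather than the study's actual simulator.

# Hedged sketch of the hybrid NWN readout + linear classifier pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def nwn_readout_stub(audio_batch, n_wires=1000, keep_frac=0.03):
    """Stand-in for the nanowire network: project raw waveforms onto many
    'wires', apply a saturating nonlinearity, and keep only keep_frac of the
    wire outputs (the data-compression step reported in the abstract)."""
    projection = rng.normal(size=(audio_batch.shape[1], n_wires))
    states = np.tanh(audio_batch @ projection)
    keep = int(n_wires * keep_frac)
    return states[:, :keep]

# toy data: 200 "utterances" of 2000 raw samples, 10 spoken-digit classes
X_raw = rng.normal(size=(200, 2000))
y = rng.integers(0, 10, size=200)

X_feat = nwn_readout_stub(X_raw)                  # 2000 raw samples -> 30 features
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

The design point is that the physical network does the nonlinear, spatio-temporal feature extraction, so only a lightweight linear classifier needs to run in software.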
Abstract: This paper presents NT-Java-1.1B, an open-source specialized code language model built on StarCoderBase-1.1B and designed for coding tasks in Java. NT-Java-1.1B achieves state-of-the-art performance, surpassing its base model and the majority of other models of similar size on the MultiPL-E Java code benchmark. While there have been studies on extending large, generic pre-trained models to improve proficiency in specific programming languages such as Python, similar investigations on small code models for other programming languages are lacking. Large code models require specialized hardware such as GPUs for inference, highlighting the need for research into small code models that can be deployed on developer desktops. This paper addresses this gap by developing a small Java code model, NT-Java-1.1B, and its quantized versions, which perform comparably to open models of around 1.1B parameters on the MultiPL-E Java code benchmarks, making them ideal for desktop deployment. This paper establishes the foundation for specialized models across languages and sizes for a family of NT Models.
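Since the stated use case is desktop deployment, a natural usage pattern is standard causal-LM inference through Hugging Face transformers, as sketched below. The repository id is an assumption (the abstract does not give one), and the quantized variants would be loaded the same way from their respective checkpoints.

# Hedged sketch: running a ~1.1B StarCoderBase-family Java model for code completion.
# The model id below is an assumed placeholder; substitute the id from the paper's release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "infosys/NT-Java-1.1B"  # assumed id; check the official model release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # quantized/fp16 variants shrink the desktop footprint

prompt = "public static int binarySearch(int[] arr, int key) {"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))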