Abstract: Uncertainty in biological neural systems appears to be computationally beneficial rather than detrimental. In neuromorphic computing systems, however, device variability often limits performance, including accuracy and efficiency. In this work, we propose a spiking Bayesian neural network (SBNN) framework that unifies dynamic models of intrinsic device stochasticity (based on Magnetic Tunnel Junctions) with stochastic-threshold neurons, leveraging noise as a functional Bayesian resource. Experiments demonstrate that the SBNN achieves high accuracy (99.16% on MNIST, 94.84% on CIFAR-10) at 8-bit precision, while a rate-estimation method provides a ~20-fold training speedup. Furthermore, the SBNN exhibits superior robustness, with a 67% accuracy improvement under synaptic weight noise and a 12% improvement under input noise compared to standard spiking neural networks. Crucially, hardware validation confirms that the physical device implementation incurs negligible accuracy and calibration loss relative to the algorithmic model. Converting device stochasticity into neuronal uncertainty offers a route to compact, energy-efficient neuromorphic computing under uncertainty.
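To make the stochastic-threshold mechanism concrete, the sketch below models a neuron whose effective firing threshold jitters from step to step, so repeated stochastic forward passes act like Monte Carlo samples from a Bayesian predictive distribution. This is a minimal illustration under stated assumptions, not the paper's implementation: the sigmoidal switching probability, the noise scale `sigma`, the leak factor, and the toy single-layer network are all assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_threshold_spike(v, theta=1.0, sigma=0.2):
    """Bernoulli spike with a sigmoidal switching probability.

    Models (as an assumption) MTJ-like thermally activated switching:
    the effective threshold fluctuates, so the spike probability is a
    smooth function of the membrane potential v.
    """
    p = 1.0 / (1.0 + np.exp(-(v - theta) / sigma))
    return (rng.random(v.shape) < p).astype(np.float32)

def forward(x, w, steps=20):
    """Rate-code one layer: average spike counts over `steps` time steps."""
    spikes = np.zeros(w.shape[1])
    v = np.zeros(w.shape[1])
    for _ in range(steps):
        v = 0.9 * v + x @ w          # leaky integration (leak = 0.9, assumed)
        s = stochastic_threshold_spike(v)
        v = v * (1.0 - s)            # reset neurons that fired
        spikes += s
    return spikes / steps            # firing-rate estimate

# Monte Carlo ensemble: repeating the stochastic pass yields a predictive
# mean and a spread that serves as an uncertainty estimate.
x = rng.random(16)
w = rng.normal(scale=0.5, size=(16, 4))
samples = np.stack([forward(x, w) for _ in range(32)])
print("predictive mean:", samples.mean(axis=0))
print("predictive std :", samples.std(axis=0))
```

In this reading, the device noise is not averaged away but reused as the sampling mechanism itself, which is the sense in which the abstract calls it a "functional Bayesian resource".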
Abstract: Working memory enables the brain to integrate transient information for rapid decision-making. Artificial networks typically replicate it via recurrent or parallel architectures, yet incur high energy costs and noise sensitivity. Here we report IPNet, a hardware-software co-designed neuromorphic architecture that realizes human-like working memory via neuronal intrinsic plasticity. Exploiting the Joule-heating dynamics of Magnetic Tunnel Junctions (MTJs), IPNet physically emulates the volatility of biological memory. The memory behavior of the proposed architecture shows trends in n-back, free-recall, and memory-interference tasks similar to those reported for human subjects. Implemented exclusively with MTJ neurons, the architecture achieves 99.65% accuracy on the 11-class DVS Gesture dataset and maintains 99.48% on a novel 22-class time-reversed benchmark, outperforming RNN, LSTM, and (2+1)D CNN baselines that share identical backbones. For autonomous driving (DDD-20), IPNet reduces steering-prediction error by 14.4% compared to ResNet-LSTM. Architecturally, we identify a 'Memory-at-the-Frontier' effect in which performance is maximized at the sensing interface, validating a bio-plausible near-sensor processing paradigm. Crucially, all results rely on raw parameters from fabricated devices without optimization. Hardware-in-the-loop validation confirms the system's physical realizability. Energy analysis reveals a 2,874-fold reduction in memory power compared to LSTMs and a 90,920-fold reduction versus parallel 3D-CNNs. This capacitor-free design enables a compact ~1.5 µm² footprint (28 nm CMOS), a >20-fold reduction over standard LIF neurons. Ultimately, we demonstrate that instantiating human-like working memory via intrinsic neuronal plasticity endows neural networks with the dual biological advantages of superior dynamic vision processing and minimal metabolic cost.
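The sketch below illustrates how Joule-heating dynamics could yield volatile, working-memory-like behavior: each input pulse heats the device, and the temperature relaxes exponentially between inputs, so the memory trace fades on its own without a capacitor. The first-order thermal model, the time constant, the threshold, and the 1-back-style toy task are illustrative assumptions introduced here, not the measured parameters of the fabricated devices.

```python
import numpy as np

class MTJIntrinsicPlasticityNeuron:
    """Toy model of an MTJ neuron with Joule-heating memory (assumed dynamics).

    An input pulse raises the device temperature; the temperature decays
    toward ambient with time constant `tau`, so recent inputs remain
    'remembered' only for a short, self-expiring window -- a capacitor-free
    analogue of volatile biological working memory.
    """

    def __init__(self, tau=50e-3, heat_gain=1.0, theta=0.5):
        self.tau = tau          # thermal relaxation time constant (s), assumed
        self.heat_gain = heat_gain
        self.theta = theta      # readout threshold on the normalized trace
        self.T = 0.0            # normalized temperature above ambient

    def step(self, i_in, dt=1e-3):
        # Exponential cooling, then Joule heating from the input pulse.
        self.T *= np.exp(-dt / self.tau)
        self.T += self.heat_gain * i_in
        return self.T

    def remembers(self):
        """True while the fading thermal trace is still above threshold."""
        return self.T > self.theta

# 1-back-style toy task: pulse every 40 ms and check whether the trace of
# the previous presentation survives until the next one.
neuron = MTJIntrinsicPlasticityNeuron()
for t_ms in range(200):
    i_in = 1.0 if t_ms % 40 == 0 else 0.0
    neuron.step(i_in)
    if t_ms % 40 == 39:
        print(f"t={t_ms + 1:3d} ms  trace={neuron.T:.3f}  remembered={neuron.remembers()}")
```

Because the trace decays between pulses, repeated presentations strengthen it while gaps let it expire, which is the qualitative behavior the n-back, free-recall, and interference comparisons probe.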