Abstract: Recent advances in artificial intelligence, coupled with increasing data bandwidth requirements in applications such as video processing and high-resolution sensing, have created a growing demand for high computational performance under stringent energy constraints, especially for battery-powered and edge devices. To address this, we present a mixed-signal adiabatic capacitive neural network chip, designed in a 130$nm$ CMOS technology, that demonstrates significant energy savings together with high image classification accuracy. Our dual-layer hardware chip, incorporating 16 single-cycle multiply-accumulate engines, reliably distinguishes between 4 classes of 8x8 1-bit images, achieving classification accuracy above 95\%, within 2.7\% of an equivalent software implementation. Energy measurements reveal average energy savings of between 2.1x and 6.8x compared to an equivalent CMOS capacitive implementation.
Abstract: This paper introduces a new, highly energy-efficient, Adiabatic Capacitive Neuron (ACN) hardware implementation of an Artificial Neuron (AN) with improved functionality, accuracy, robustness and scalability over previous work. The paper describes the implementation of a \mbox{12-bit} single neuron, with positive and negative weight support, in a $\mathbf{0.18\mu m}$ CMOS technology. The paper also presents a new Threshold Logic (TL) design for a binary AN activation function that generates a low, symmetrical offset across three process corners and five temperatures between $-55^{\circ}$C and $125^{\circ}$C. Post-layout simulations demonstrate a maximum rising and falling offset voltage of 9$mV$, compared to the conventional TL, which has rising and falling offset voltages of 27$mV$ and 5$mV$ respectively, across temperature and process. Moreover, the proposed TL design shows a decrease in average energy of 1.5$\%$ at the SS corner and 2.3$\%$ at the FF corner compared to the conventional TL design. The total synapse energy saving for the proposed ACN was above 90$\%$ (over a 12x improvement) when compared to a non-adiabatic CMOS Capacitive Neuron (CCN) benchmark for frequencies ranging from 500$kHz$ to 100$MHz$. A 1000-sample Monte Carlo simulation including process variation and mismatch confirms worst-case synapse energy savings of over 90$\%$ compared to the CCN. Finally, supply voltage scaling shows consistent energy savings above 90$\%$ (except for the all-zero input case) without loss of functionality.
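As a clarifying note (this arithmetic is our addition and the symbols $E_{\mathrm{CCN}}$, $E_{\mathrm{ACN}}$ and $s$ are introduced here for illustration only), the percentage energy saving and the improvement factor quoted above are related by
% Fractional saving s = 1 - E_ACN/E_CCN, so the improvement factor is 1/(1-s).
\[
  \frac{E_{\mathrm{CCN}}}{E_{\mathrm{ACN}}} = \frac{1}{1-s},
  \qquad s = 0.90 \;\Rightarrow\; 10\times,
  \qquad s \approx 0.92 \;\Rightarrow\; 12.5\times,
\]
which is consistent with pairing an above-90\% synapse energy saving with an over-12x improvement relative to the CCN benchmark.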