
Biswadeep Chakraborty

Brain-Inspired Spiking Neural Network for Online Unsupervised Time Series Prediction

Apr 10, 2023
Biswadeep Chakraborty, Saibal Mukhopadhyay


Energy- and data-efficient online time series prediction of evolving dynamical systems is critical in several fields, especially edge AI applications that must update continuously from streaming data. However, current DNN-based supervised online learning models require large amounts of training data and cannot adapt quickly when the underlying system changes. Moreover, these models require continuous retraining on incoming data, making them highly inefficient. To address these issues, we present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN), trained with spike timing dependent plasticity (STDP). CLURSNN makes online predictions by reconstructing the underlying dynamical system via Random Delay Embedding of the membrane potentials of the neurons with the highest betweenness centrality in the recurrent layer of the RSNN. We also use topological data analysis to propose a novel loss function: the Wasserstein Distance between the persistence homologies of the predicted and observed time series. We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
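As a rough illustration of the delay-embedding step the abstract describes, the sketch below embeds a scalar observable into a higher-dimensional state using randomly chosen delays. The function name, the uniform delay sampling, and the toy sine-wave input are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def random_delay_embedding(x, dim, max_delay, rng=None):
    """Embed a scalar series x into `dim` dimensions using randomly chosen
    delays (a Takens-style delay embedding; the random-delay variant here
    is a simplified stand-in for the paper's Random Delay Embedding)."""
    rng = np.random.default_rng(rng)
    delays = np.sort(rng.choice(np.arange(1, max_delay + 1),
                                size=dim - 1, replace=False))
    delays = np.concatenate(([0], delays))   # first coordinate is the raw series
    start = delays.max()
    n = len(x) - start
    # each row is (x[t], x[t - d1], ..., x[t - d_{dim-1}])
    return np.stack([x[start - d : start - d + n] for d in delays], axis=1)

# toy observable: a sine wave standing in for a measured membrane potential
t = np.linspace(0, 20, 500)
emb = random_delay_embedding(np.sin(t), dim=3, max_delay=10, rng=0)
print(emb.shape)   # (n_points, 3)
```

A reconstruction like this turns a single measured signal into points in a state space, on which a distance between predicted and observed trajectories (here, the paper's Wasserstein distance between persistence homologies) can be computed.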

* Manuscript accepted to be published in IJCNN 2023 

Unsupervised 3D Object Learning through Neuron Activity aware Plasticity

Feb 22, 2023
Beomseok Kang, Biswadeep Chakraborty, Saibal Mukhopadhyay


We present an unsupervised deep learning model for 3D object classification. Conventional Hebbian learning, a well-known unsupervised model, suffers from loss of local features, leading to reduced performance on tasks with complex geometric objects. We present a deep network with a novel Neuron Activity Aware (NeAW) Hebbian learning rule that dynamically switches neurons between Hebbian and anti-Hebbian learning depending on their activity. We analytically show that NeAW Hebbian learning relieves the bias in neuron activity, allowing more neurons to attend to the representation of the 3D objects. Empirical results show that NeAW Hebbian learning outperforms other variants of Hebbian learning and achieves higher accuracy than fully supervised models when training data is limited.
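The switching idea can be sketched as follows: apply the Hebbian outer-product update with a positive sign for quiet neurons and a negative (anti-Hebbian) sign for highly active ones. The activity threshold `theta`, the ReLU activation, and the learning rate are illustrative assumptions, not the paper's exact NeAW rule:

```python
import numpy as np

def neaw_update(W, x, lr=0.01, theta=0.5):
    """One NeAW-style step (sketch): neurons whose activity exceeds
    `theta` receive anti-Hebbian updates, the rest Hebbian updates."""
    y = np.maximum(W @ x, 0.0)                  # ReLU activities
    sign = np.where(y > theta, -1.0, 1.0)       # anti-Hebbian if too active
    W = W + lr * sign[:, None] * np.outer(y, x) # signed Hebbian outer product
    return W, y

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 16))         # 8 neurons, 16 inputs
x = rng.normal(size=16)
W, y = neaw_update(W, x)
```

The anti-Hebbian branch pushes over-active neurons away from the current input, which is one simple way to relieve the activity bias the abstract refers to.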

* Published as a conference paper at ICLR 2023 

Heterogeneous Neuronal and Synaptic Dynamics for Spike-Efficient Unsupervised Learning: Theory and Design Principles

Feb 22, 2023
Biswadeep Chakraborty, Saibal Mukhopadhyay


This paper shows that heterogeneity in neuronal and synaptic dynamics reduces the spiking activity of a Recurrent Spiking Neural Network (RSNN) while improving prediction performance, enabling spike-efficient (unsupervised) learning. We analytically show that diversity in the neurons' integration/relaxation dynamics improves an RSNN's ability to learn more distinct input patterns (higher memory capacity), leading to improved classification and prediction performance. We further prove that heterogeneous Spike-Timing-Dependent-Plasticity (STDP) dynamics of synapses reduce spiking activity while preserving memory capacity. These analytical results motivate a Heterogeneous RSNN (HRSNN) design that uses Bayesian optimization to determine the heterogeneity in neurons and synapses so as to improve $\mathcal{E}$, defined as the ratio of spiking activity to memory capacity. Empirical results on time series classification and prediction tasks show that the optimized HRSNN increases performance and reduces spiking activity compared to a homogeneous RSNN.
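The "diversity in integration/relaxation dynamics" can be made concrete with a minimal leaky integrate-and-fire simulation where each neuron gets its own membrane time constant. The time-constant range, threshold, and input statistics below are illustrative choices (the paper selects such heterogeneity via Bayesian optimization):

```python
import numpy as np

def simulate_lif(inputs, tau, v_th=1.0, dt=1.0):
    """Simulate LIF neurons with per-neuron membrane time constants `tau`
    (the heterogeneity knob). inputs: (T, N) input current per step."""
    T, N = inputs.shape
    v = np.zeros(N)
    out = np.zeros((T, N))
    decay = np.exp(-dt / np.asarray(tau))   # heterogeneous leak per neuron
    for t in range(T):
        v = decay * v + inputs[t]           # leaky integration
        fired = v >= v_th
        out[t] = fired
        v[fired] = 0.0                      # reset after a spike
    return out

rng = np.random.default_rng(1)
taus = rng.uniform(5.0, 50.0, size=10)      # diverse integration timescales
inp = (rng.random((200, 10)) < 0.1).astype(float) * 0.5
spikes = simulate_lif(inp, taus)
rate = spikes.mean()                        # overall spiking activity
```

Fast-decaying neurons respond only to tightly clustered inputs while slow ones integrate over long windows, which is the mechanism by which heterogeneous timescales let the network represent more distinct input patterns.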

* Published in ICLR 2023 (The Eleventh International Conference on Learning Representations; https://openreview.net/forum?id=QIRtAqoXwj) 

$μ$DARTS: Model Uncertainty-Aware Differentiable Architecture Search

Jul 24, 2021
Biswadeep Chakraborty, Saibal Mukhopadhyay


We present a Model Uncertainty-aware Differentiable ARchiTecture Search ($\mu$DARTS) that optimizes neural networks to simultaneously achieve high accuracy and low uncertainty. We introduce concrete dropout within DARTS cells and include a Monte-Carlo regularizer within the training loss to optimize the concrete dropout probabilities. A predictive variance term is introduced in the validation loss to enable searching for architectures with minimal model uncertainty. Experiments on CIFAR10, CIFAR100, SVHN, and ImageNet verify the effectiveness of $\mu$DARTS in improving accuracy and reducing uncertainty compared to existing DARTS methods. Moreover, the final architecture obtained from $\mu$DARTS shows higher robustness to noise in the input image and in the model parameters than architectures obtained from existing DARTS methods.
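The predictive-variance idea rests on Monte-Carlo dropout: run several stochastic forward passes at test time and treat the sample variance of the outputs as a model-uncertainty estimate. The toy two-layer network, its weights, and the dropout rate below are illustrative assumptions, not the $\mu$DARTS search space:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.2, n_samples=50, rng=None):
    """MC-dropout prediction (sketch): average over stochastic forward
    passes; the per-output sample variance serves as the uncertainty
    estimate that a predictive-variance loss term would penalize."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_samples):
        mask = (rng.random(W1.shape[0]) > p) / (1 - p)  # inverted dropout
        h = np.maximum(W1 @ x, 0.0) * mask              # dropped hidden layer
        preds.append(W2 @ h)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.var(axis=0)        # prediction, uncertainty

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 8))
W2 = rng.normal(size=(4, 32))
mean, var = mc_dropout_predict(rng.normal(size=8), W1, W2, rng=1)
```

Concrete dropout, as used in the paper, additionally makes the dropout probability `p` itself a differentiable parameter so it can be optimized by the training loss rather than fixed by hand.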

* 10 pages, 7 tables, 6 figures, submitted to TNNLS 

Characterization of Generalizability of Spike Time Dependent Plasticity trained Spiking Neural Networks

May 31, 2021
Biswadeep Chakraborty, Saibal Mukhopadhyay


A Spiking Neural Network (SNN) trained with Spike Time Dependent Plasticity (STDP) is a neuro-inspired unsupervised learning method for various machine learning applications. This paper studies the generalizability properties of the STDP learning processes using the Hausdorff dimension of the trajectories of the learning algorithm. The paper analyzes the effects of STDP learning models and associated hyper-parameters on the generalizability properties of an SNN and characterizes the generalizability vs learnability trade-off in an SNN. The analysis is used to develop a Bayesian optimization approach to optimize the hyper-parameters for an STDP model to improve the generalizability properties of an SNN.
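The STDP hyper-parameters studied here can be seen in the standard pair-based exponential STDP window, sketched below. The specific values of the amplitudes and time constants are illustrative; they are exactly the kind of hyper-parameters the paper's Bayesian optimization would tune:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based exponential STDP window (sketch).
    dt = t_post - t_pre, in the same units as the time constants."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # pre before post: potentiate
                    -a_minus * np.exp(dt / tau_minus))  # post before pre: depress

dts = np.array([-40.0, -5.0, 0.0, 5.0, 40.0])
dw = stdp_dw(dts)   # weight changes for each pre/post timing difference
```

Shrinking the time constants narrows the plasticity window and, per the paper's analysis, shifts the generalizability-versus-learnability trade-off of the trained SNN.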

* 15 pages, submitted to Frontiers in Neuroscience. arXiv admin note: text overlap with arXiv:2010.08195, arXiv:2006.09313 by other authors 

A Fully Spiking Hybrid Neural Network for Energy-Efficient Object Detection

Apr 21, 2021
Biswadeep Chakraborty, Xueyuan She, Saibal Mukhopadhyay


This paper proposes a Fully Spiking Hybrid Neural Network (FSHNN) for energy-efficient and robust object detection on resource-constrained platforms. The network architecture is based on a convolutional SNN using leaky integrate-and-fire neuron models. The model combines unsupervised Spike Time-Dependent Plasticity (STDP) learning with back-propagation (STBP) learning and uses Monte Carlo Dropout to estimate the uncertainty error. FSHNN provides better accuracy than DNN-based object detectors while being 150X more energy-efficient. It also outperforms these object detectors when subjected to noisy input data and limited labeled training data, with a lower uncertainty error.

* 10 pages, Submitted Manuscript 

Cost-aware Feature Selection for IoT Device Classification

Sep 02, 2020
Biswadeep Chakraborty, Dinil Mon Divakaran, Ido Nevat, Gareth W. Peters, Mohan Gurusamy


Classification of IoT devices into different types is of paramount importance from multiple perspectives, including security and privacy. Recent works have explored machine learning techniques for fingerprinting (or classifying) IoT devices, with promising results. However, existing works assume that the features used for building the machine learning models are readily available or can be easily extracted from the network traffic; in other words, they do not consider the costs associated with feature extraction. In this work, we take a more realistic approach and argue that feature extraction has a cost, and that the costs differ across features. We also take a step forward from the current practice of treating the misclassification loss as a binary value, and make a case for different losses based on the misclassification performance. Thereby, and more importantly, we introduce the notion of risk for IoT device classification. We define and formulate the problem of cost-aware IoT device classification. Since this is a combinatorial optimization problem, we develop a novel algorithm to solve it quickly and effectively using Cross-Entropy (CE) based stochastic optimization. Using traffic of real devices, we demonstrate the capability of the CE-based algorithm to select features with minimal risk of misclassification while keeping the cost of feature extraction within a specified limit.
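The Cross-Entropy method over feature subsets can be sketched as follows: sample binary masks from per-feature Bernoulli probabilities, keep the elite (lowest-risk, within-budget) masks, and move the probabilities toward them. The toy risk function, cost model, and smoothing constant are illustrative stand-ins for the paper's risk formulation:

```python
import numpy as np

def ce_feature_selection(risk_fn, costs, budget, n_iter=50, pop=200,
                         elite_frac=0.1, rng=None):
    """Cross-Entropy method over binary feature masks (sketch)."""
    rng = np.random.default_rng(rng)
    d = len(costs)
    p = np.full(d, 0.5)                          # Bernoulli sampling probabilities
    n_elite = max(1, int(elite_frac * pop))
    best, best_risk = None, np.inf
    for _ in range(n_iter):
        masks = (rng.random((pop, d)) < p).astype(int)
        # over-budget subsets are infeasible: assign infinite risk
        scores = np.array([risk_fn(m) if m @ costs <= budget else np.inf
                           for m in masks])
        elite = masks[np.argsort(scores)[:n_elite]]
        p = 0.9 * p + 0.1 * elite.mean(axis=0)   # smoothed update toward elites
        if scores.min() < best_risk:
            best_risk = scores.min()
            best = masks[np.argmin(scores)]
    return best, best_risk

# toy problem: feature i reduces risk by gain[i], subject to a cost budget
rng = np.random.default_rng(0)
costs = rng.uniform(1, 5, size=12)
gain = rng.uniform(0, 1, size=12)
risk = lambda m: 1.0 - m @ gain / gain.sum()
mask, r = ce_feature_selection(risk, costs, budget=15.0, rng=1)
```

The appeal of the CE method for this problem is that it needs only black-box evaluations of the risk, so the same loop works whatever misclassification-loss model defines `risk_fn`.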

* 32 pages, 8 figures 