Saibal Mukhopadhyay

A Quantum Hopfield Associative Memory Implemented on an Actual Quantum Processor

May 25, 2021
Nathan Eli Miller, Saibal Mukhopadhyay

In this work, we present a Quantum Hopfield Associative Memory (QHAM) and demonstrate its capabilities in simulation and hardware using IBM Quantum Experience. The QHAM is based on a quantum neuron design which can be utilized for many different machine learning applications and can be implemented on real quantum hardware without requiring mid-circuit measurement or reset operations. We analyze the accuracy of the neuron and the full QHAM considering hardware errors via simulation with hardware noise models as well as with implementation on the 15-qubit ibmq_16_melbourne device. The quantum neuron and the QHAM are shown to be resilient to noise and to require low qubit and time overhead. We benchmark the QHAM by testing its effective memory capacity against qubit- and circuit-level errors and demonstrate its capabilities in the NISQ era of quantum hardware. This demonstration of the first functional QHAM implemented on NISQ-era quantum hardware is a significant step in machine learning at the leading edge of quantum computing.
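
For reference, a minimal NumPy sketch of the classical Hopfield associative memory that a QHAM quantizes: Hebbian storage of bipolar patterns and iterative recall from a corrupted probe. The quantum neuron circuit and the IBM Quantum implementation described in the paper are not reproduced here; pattern count, network size, and the number of flipped bits are illustrative assumptions.

```python
# Classical Hopfield associative memory sketch (not the paper's quantum circuit).
import numpy as np

def store(patterns):
    """Hebbian weight matrix for a set of +/-1 patterns, zero self-connections."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)
    return w / n

def recall(w, probe, steps=10):
    """Synchronous updates until the state stops changing."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(w @ s)
        s_new[s_new == 0] = 1          # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(2, 32))   # 2 stored patterns, 32 "neurons"
    w = store(patterns)
    noisy = patterns[0].copy()
    noisy[:4] *= -1                                 # corrupt the probe: flip 4 bits
    print("recovered stored pattern:", np.array_equal(recall(w, noisy), patterns[0]))
```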

* 17 pages, 10 figures, 3 tables 

A Fully Spiking Hybrid Neural Network for Energy-Efficient Object Detection

Apr 21, 2021
Biswadeep Chakraborty, Xueyuan She, Saibal Mukhopadhyay

This paper proposes a Fully Spiking Hybrid Neural Network (FSHNN) for energy-efficient and robust object detection on resource-constrained platforms. The network architecture is based on a convolutional SNN using leaky integrate-and-fire neuron models. The model combines unsupervised Spike Timing-Dependent Plasticity (STDP) learning with backpropagation-based (STBP) learning and uses Monte Carlo Dropout to estimate the uncertainty error. FSHNN provides better accuracy than DNN-based object detectors while being 150X more energy-efficient. It also outperforms these object detectors when subjected to noisy input data and less labeled training data, with a lower uncertainty error.
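
As a point of reference for the spiking building block, a minimal NumPy sketch of leaky integrate-and-fire (LIF) neuron dynamics of the kind the abstract mentions: leaky integration of input current, a spike when the membrane potential crosses a threshold, and a hard reset. The decay factor, threshold, and input statistics are illustrative assumptions, not the FSHNN's actual parameters.

```python
import numpy as np

def lif_forward(inputs, decay=0.9, v_th=1.0):
    """Simulate a layer of LIF neurons over T time steps.
    inputs: array of shape (T, N) of synaptic currents."""
    T, N = inputs.shape
    v = np.zeros(N)                    # membrane potentials
    spikes = np.zeros((T, N))
    for t in range(T):
        v = decay * v + inputs[t]      # leaky integration
        fired = v >= v_th
        spikes[t] = fired
        v = np.where(fired, 0.0, v)    # hard reset after a spike
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    currents = rng.uniform(0.0, 0.5, size=(50, 8))   # 50 steps, 8 neurons
    s = lif_forward(currents)
    print("spike counts per neuron:", s.sum(axis=0))
```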

* 10 pages, Submitted Manuscript 

Towards Improving the Trustworthiness of Hardware based Malware Detector using Online Uncertainty Estimation

Mar 21, 2021
Harshit Kumar, Nikhil Chawla, Saibal Mukhopadhyay

Hardware-based Malware Detectors (HMDs) using Machine Learning (ML) models have shown promise in detecting malicious workloads. However, the conventional black-box ML approach used in these HMDs fails to address uncertain predictions, including those made on zero-day malware. The ML models used in HMDs are agnostic to the uncertainty that determines whether the model "knows what it knows," severely undermining their trustworthiness. We propose an ensemble-based approach that quantifies uncertainty in the predictions made by the ML models of an HMD when it encounters workloads different from those it was trained on. We test our approach on two different HMDs that have been proposed in the literature. We show that the proposed uncertainty estimator can detect >90% of unknown workloads for the Power-management-based HMD, and conclude that the overlapping benign and malware classes undermine the trustworthiness of the Performance Counter-based HMD.
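
A minimal NumPy sketch of the general ensemble idea: average the class probabilities of several ensemble members and flag samples whose predictive entropy is high as workloads the detector should not be trusted on. The entropy threshold and the toy probability values are illustrative assumptions; the paper's HMD models and features are not reproduced here.

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """member_probs: array (M, B, C) of class probabilities from M ensemble
    members for B samples. Returns mean prediction and predictive entropy."""
    mean_p = member_probs.mean(axis=0)                        # (B, C)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)  # (B,)
    return mean_p, entropy

def flag_unknown(member_probs, threshold=0.5):
    """Flag samples whose predictive entropy exceeds a threshold as
    'unknown' workloads (illustrative threshold)."""
    _, entropy = ensemble_uncertainty(member_probs)
    return entropy > threshold

if __name__ == "__main__":
    # Members agree on sample 0 (benign-looking) but disagree on sample 1.
    probs = np.stack([
        np.array([[0.95, 0.05], [0.9, 0.1]]),
        np.array([[0.94, 0.06], [0.1, 0.9]]),
        np.array([[0.96, 0.04], [0.5, 0.5]]),
    ])                                                        # (M=3, B=2, C=2)
    print("predictive entropy:", ensemble_uncertainty(probs)[1])
    print("flagged as unknown:", flag_unknown(probs))
```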

A Deep Learning-based Collocation Method for Modeling Unknown PDEs from Sparse Observation

Nov 30, 2020
Priyabrata Saha, Saibal Mukhopadhyay

Deep learning-based modeling of dynamical systems driven by partial differential equations (PDEs) has become quite popular in recent years. However, most existing deep learning-based methods either assume a strong physics prior, depend on specific initial and boundary conditions, or require data on a dense regular grid, making them unsuitable for modeling unknown PDEs from sparsely observed data. This paper presents a deep learning-based collocation method for modeling dynamical systems driven by unknown PDEs when data sites are sparsely distributed. The proposed method is independent of the spatial dimension, is geometrically flexible, learns from sparsely available data, and yields a learned model that does not depend on any specific initial and boundary conditions. We demonstrate our method on forecasting tasks for the two-dimensional wave equation and the Burgers-Fisher equation in multiple geometries with different boundary conditions.
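
As background, a minimal NumPy sketch of the classical collocation/interpolation idea underlying such methods: fit radial basis function coefficients at sparse, scattered data sites and evaluate the reconstructed field anywhere in the domain, independent of grid structure. The Gaussian kernel, its width, and the test function are illustrative assumptions; the paper's learned (neural) collocation scheme is not reproduced here.

```python
import numpy as np

def rbf_matrix(xa, xb, eps=2.0):
    """Gaussian radial basis functions between two point sets."""
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

def fit(sites, values, eps=2.0, reg=1e-8):
    """Solve for RBF coefficients from sparse scattered observations."""
    A = rbf_matrix(sites, sites, eps) + reg * np.eye(len(sites))
    return np.linalg.solve(A, values)

def evaluate(query, sites, coeffs, eps=2.0):
    """Evaluate the reconstructed field at arbitrary query points."""
    return rbf_matrix(query, sites, eps) @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    sites = rng.uniform(0, 1, size=(30, 2))                # 30 sparse 2-D data sites
    values = np.sin(2 * np.pi * sites[:, 0]) * np.cos(2 * np.pi * sites[:, 1])
    c = fit(sites, values)
    query = rng.uniform(0, 1, size=(5, 2))
    truth = np.sin(2 * np.pi * query[:, 0]) * np.cos(2 * np.pi * query[:, 1])
    print("max abs error at query points:", np.abs(evaluate(query, sites, c) - truth).max())
```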

* 14 pages, 6 figures 

Neural Identification for Control

Oct 20, 2020
Priyabrata Saha, Magnus Egerstedt, Saibal Mukhopadhyay

We present a new method for learning a control law that stabilizes an unknown nonlinear dynamical system at an equilibrium point. We formulate a system identification task in a self-supervised learning setting that jointly learns a controller and a corresponding stable closed-loop dynamics hypothesis. The input-output behavior of the unknown dynamical system under random control inputs is used as the supervising signal to train the neural network-based system model and the controller. The method relies on Lyapunov stability theory to generate a stable closed-loop dynamics hypothesis and the corresponding control law. We demonstrate our method on various nonlinear control problems such as n-link pendulum balancing, pendulum-on-cart balancing, and wheeled-vehicle path following.
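
A minimal PyTorch sketch of the general recipe under simplifying assumptions: a neural dynamics model is fit to transitions generated under random control inputs, while the learned controller is penalized whenever a fixed quadratic Lyapunov candidate V(x) = ||x||^2 fails to decrease along the learned closed-loop dynamics. The paper's construction of the stable closed-loop hypothesis is more principled; the Lyapunov candidate, network sizes, and synthetic system below are illustrative.

```python
import torch
import torch.nn as nn

STATE_DIM, CTRL_DIM = 2, 1

dynamics = nn.Sequential(nn.Linear(STATE_DIM + CTRL_DIM, 32), nn.Tanh(),
                         nn.Linear(32, STATE_DIM))           # learned f(x, u)
controller = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(),
                           nn.Linear(32, CTRL_DIM))          # learned u(x)

def lyapunov_violation(x, alpha=0.1):
    """Penalize states where V(x) = ||x||^2 decreases too slowly along the
    learned closed-loop dynamics: we want dV/dt <= -alpha * V."""
    u = controller(x)
    xdot = dynamics(torch.cat([x, u], dim=-1))
    v = (x ** 2).sum(dim=-1)
    vdot = 2.0 * (x * xdot).sum(dim=-1)
    return torch.relu(vdot + alpha * v).mean()

def identification_loss(x, u_rand, x_next, dt=0.05):
    """Fit the dynamics model to observed (x, u, x_next) transitions."""
    pred = x + dt * dynamics(torch.cat([x, u_rand], dim=-1))
    return ((pred - x_next) ** 2).mean()

opt = torch.optim.Adam(list(dynamics.parameters()) + list(controller.parameters()), lr=1e-3)

if __name__ == "__main__":
    # Synthetic transitions from a linear system unknown to the learner.
    A = torch.tensor([[0.0, 1.0], [-1.0, -0.5]])
    B = torch.tensor([[0.0], [1.0]])
    x = torch.randn(256, STATE_DIM)
    u = torch.randn(256, CTRL_DIM)
    x_next = x + 0.05 * (x @ A.T + u @ B.T)
    for step in range(200):
        loss = identification_loss(x, u, x_next) + lyapunov_violation(x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final loss:", float(loss))
```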

* 7 pages, 7 figures 

PhICNet: Physics-Incorporated Convolutional Recurrent Neural Networks for Modeling Dynamical Systems

Apr 14, 2020
Priyabrata Saha, Saurabh Dash, Saibal Mukhopadhyay

Dynamical systems involving partial differential equations (PDEs) and ordinary differential equations (ODEs) arise in many fields of science and engineering. In this paper, we present a physics-incorporated deep learning framework to model and predict the spatiotemporal evolution of dynamical systems governed by partially known inhomogeneous PDEs with unobservable source dynamics. We formulate our model, PhICNet, as a convolutional recurrent neural network which is end-to-end trainable for spatiotemporal evolution prediction of dynamical systems. Experimental results show the long-term prediction capability of our model.
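
A minimal PyTorch sketch of the physics-incorporated recurrent idea, using a heat-equation toy rather than the paper's actual model: a fixed convolutional Laplacian stencil supplies the known part of the PDE, a small convolutional network supplies a learned, unobserved source term, and the two are combined in an explicit time step. Diffusivity, step size, and network sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Fixed 5-point Laplacian stencil: the "known" part of the PDE.
LAPLACIAN = torch.tensor([[[[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]]]])

class PhysicsIncorporatedCell(nn.Module):
    """One recurrent step: known diffusion operator plus a learned source term."""
    def __init__(self, nu=0.1, dt=0.1):
        super().__init__()
        self.nu, self.dt = nu, dt
        self.source = nn.Sequential(                 # learned, unobserved source dynamics
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, u):
        lap = F.conv2d(u, LAPLACIAN, padding=1)      # physics prior
        return u + self.dt * (self.nu * lap + self.source(u))

if __name__ == "__main__":
    cell = PhysicsIncorporatedCell()
    u = torch.randn(1, 1, 32, 32)                    # one field snapshot
    for _ in range(10):                              # roll out 10 steps
        u = cell(u)
    print("rollout output shape:", tuple(u.shape))
```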

MagNet: Discovering Multi-agent Interaction Dynamics using Neural Network

Mar 03, 2020
Priyabrata Saha, Arslan Ali, Burhan A. Mudassar, Yun Long, Saibal Mukhopadhyay

We present MagNet, a neural network-based multi-agent interaction model that discovers the governing dynamics and predicts the evolution of a complex multi-agent system from observations. We formulate a multi-agent system as a coupled nonlinear network with a generic ordinary differential equation (ODE) based state evolution, and develop a neural network-based realization of its time-discretized model. MagNet is trained to discover the core dynamics of a multi-agent system from observations, and is tuned online to learn agent-specific parameters of the dynamics, ensuring accurate prediction even when the physical or relational attributes of agents, or the number of agents, change. We evaluate MagNet on a point-mass system in two-dimensional space, Kuramoto phase-synchronization dynamics, and predator-swarm interaction dynamics, demonstrating orders-of-magnitude improvement in prediction accuracy over traditional deep learning models.
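
A minimal PyTorch sketch of the core modeling idea as described in the abstract: a learned pairwise interaction function is summed over all agent pairs and advanced with an explicit Euler step of the resulting ODE. MagNet's actual architecture and its agent-specific online tuning are not reproduced; state dimension, hidden size, and step size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairwiseInteraction(nn.Module):
    """Euler-discretized multi-agent dynamics: each agent's state changes by
    the sum of a learned pairwise interaction with every other agent."""
    def __init__(self, state_dim=2, hidden=32, dt=0.05):
        super().__init__()
        self.dt = dt
        self.edge = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, state_dim))

    def forward(self, x):
        # x: (num_agents, state_dim)
        n = x.shape[0]
        xi = x[:, None, :].expand(n, n, -1)          # receiver states
        xj = x[None, :, :].expand(n, n, -1)          # sender states
        msg = self.edge(torch.cat([xi, xj], dim=-1)) # pairwise interaction terms
        msg = msg * (1 - torch.eye(n)[:, :, None])   # drop self-interaction
        return x + self.dt * msg.sum(dim=1)          # Euler step of the learned ODE

if __name__ == "__main__":
    model = PairwiseInteraction()
    x = torch.randn(5, 2)                            # 5 agents in 2-D
    traj = [x]
    for _ in range(20):
        traj.append(model(traj[-1]))
    print("trajectory length:", len(traj), "final state shape:", tuple(traj[-1].shape))
```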

* Accepted manuscript by ICRA 2020 

Improving Robustness of ReRAM-based Spiking Neural Network Accelerator with Stochastic Spike-timing-dependent-plasticity

Sep 11, 2019
Xueyuan She, Yun Long, Saibal Mukhopadhyay

Spike-timing-dependent plasticity (STDP) is an unsupervised learning algorithm for spiking neural networks (SNNs), which promise a deeper understanding of the human brain and more powerful artificial intelligence. While conventional computing systems fail to simulate SNNs efficiently, processing-in-memory (PIM) based on devices such as ReRAM can be used to design fast and efficient STDP-based SNN accelerators, as it operates in close resemblance to biological neural networks. However, real-life implementations of such designs still suffer from the impact of input noise and device variation. In this work, we present a novel stochastic STDP algorithm that uses spiking-frequency information to dynamically adjust synaptic behavior. The algorithm is tested on a pattern recognition task with noisy input and shows an accuracy improvement over deterministic STDP. In addition, we show that the new algorithm can be used to design a robust ReRAM-based SNN accelerator with strong resilience to device variation.
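
A minimal NumPy sketch of pair-based STDP with a stochastic twist: the weight update is applied only with a probability that, in this toy, shrinks as the post-synaptic firing rate grows. The frequency-dependent probability is an illustrative stand-in, not the paper's actual rule; all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def stdp_delta(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if the pre-spike precedes the post-spike,
    depress otherwise (dt = t_post - t_pre, in ms)."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def stochastic_stdp_update(w, t_pre, t_post, post_rate, rate_ref=20.0):
    """Apply the STDP update only with a probability that decreases with the
    post-synaptic firing rate (illustrative stochastic rule)."""
    dw = stdp_delta(t_post - t_pre)
    p_apply = min(1.0, rate_ref / max(post_rate, 1e-6))
    if rng.random() < p_apply:
        w = np.clip(w + dw, 0.0, 1.0)                # keep the synapse in [0, 1]
    return w

if __name__ == "__main__":
    w = 0.5
    for t_pre, t_post, rate in [(10, 15, 10.0), (40, 35, 60.0), (70, 78, 25.0)]:
        w = stochastic_stdp_update(w, t_pre, t_post, rate)
    print("final weight:", round(w, 4))
```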
