Sarthak Gupta

Rapid Training Data Creation by Synthesizing Medical Images for Classification and Localization

Aug 09, 2023
Abhishek Kushwaha, Sarthak Gupta, Anish Bhanushali, Tathagato Rai Dastidar

While the use of artificial intelligence (AI) for medical image analysis is gaining wide acceptance, the expertise, time and cost required to generate annotated data in the medical field are significantly high due to the limited availability of both data and expert annotation. Strongly supervised object localization models require exhaustively annotated data, meaning that all objects of interest in an image are identified. This is difficult to achieve and verify for medical images. We present a method for transforming real data to train any deep neural network and overcome these problems. We show the efficacy of this approach on both a weakly supervised and a strongly supervised localization model. For the weakly supervised model, we show that localization accuracy increases significantly when the generated data are used. For the strongly supervised model, this approach removes the need for exhaustive annotation of real images. In the latter case, we show that the accuracy of a model trained with generated images closely parallels that of a model trained with exhaustively annotated real images. The results are demonstrated on images of human urine samples obtained using microscopy.

* https://openaccess.thecvf.com/content_CVPRW_2020/html/w57/Kushwaha_Rapid_Training_Data_Creation_by_Synthesizing_Medical_Images_for_Classification_CVPRW_2020_paper.html 
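
As a rough illustration of how such synthetic, exhaustively annotated training data can be assembled, the sketch below composites pre-annotated object crops onto clean background fields, so every pasted object contributes a bounding box by construction. The compositing strategy, the (crop, class_id) data layout, and the `synthesize` helper are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch (not the authors' pipeline): paste pre-annotated object
# crops onto clean background fields so that every synthetic image comes with
# exhaustive bounding-box labels by construction.
import random
from PIL import Image

def synthesize(background: Image.Image,
               crops: list[tuple[Image.Image, int]],
               n_objects: int = 10):
    """Paste `n_objects` randomly chosen (crop, class_id) pairs onto a copy of
    `background`; returns the composite and its (x1, y1, x2, y2, class_id) boxes.
    Assumes every crop is smaller than the background."""
    canvas = background.copy()
    boxes = []
    for _ in range(n_objects):
        crop, class_id = random.choice(crops)
        x = random.randint(0, canvas.width - crop.width)
        y = random.randint(0, canvas.height - crop.height)
        mask = crop if crop.mode == "RGBA" else None   # honor transparency if present
        canvas.paste(crop, (x, y), mask)
        boxes.append((x, y, x + crop.width, y + crop.height, class_id))
    return canvas, boxes
```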

[Re] Double Sampling Randomized Smoothing

Jun 27, 2023
Aryan Gupta, Sarthak Gupta, Abhay Kumar, Harsh Dugar

This paper is a contribution to the reproducibility challenge in the field of machine learning, specifically addressing the issue of certifying the robustness of neural networks (NNs) against adversarial perturbations. The proposed Double Sampling Randomized Smoothing (DSRS) framework overcomes the limitations of existing methods by using an additional smoothing distribution to improve the robustness certification. The paper provides a concrete instantiation of DSRS for a generalized family of Gaussian smoothing and a computationally efficient method for its implementation. Experiments on MNIST and CIFAR-10 demonstrate the effectiveness of DSRS, which consistently certifies larger robust radii than other methods. Various ablation studies are also conducted to further analyze the hyperparameters and the effect of adversarial training methods on the radius certified by the proposed framework.
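
To make the double-sampling idea concrete, the sketch below estimates the smoothed classifier's top-class probability under a primary Gaussian and under an additional, narrower Gaussian; DSRS then combines the two estimates to certify a radius through an optimization that is not reproduced here. The choice of the second distribution, the raw Monte Carlo frequencies (a real certificate would use confidence bounds), and the `double_sample` helper are simplifications for illustration.

```python
# Sketch of the sampling half of double-sampling randomized smoothing: estimate
# the top-class probability of the smoothed classifier under a primary Gaussian P
# and under an additional, narrower Gaussian Q. A real certificate would replace
# the raw frequencies with confidence bounds and feed (p_a, q_a) into the DSRS
# optimization, which is not reproduced here.
import torch

@torch.no_grad()
def smoothed_counts(model, x, sigma, n_samples, num_classes, batch=256):
    """Count hard-label predictions of `model` on `x` under N(0, sigma^2 I) noise."""
    counts = torch.zeros(num_classes)
    remaining = n_samples
    while remaining > 0:
        b = min(batch, remaining)
        noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape)
        counts += torch.bincount(model(noisy).argmax(dim=1), minlength=num_classes)
        remaining -= b
    return counts

def double_sample(model, x, num_classes, sigma_p=0.50, sigma_q=0.25, n=10_000):
    counts_p = smoothed_counts(model, x, sigma_p, n, num_classes)
    top_class = int(counts_p.argmax())
    counts_q = smoothed_counts(model, x, sigma_q, n, num_classes)
    p_a = counts_p[top_class] / n     # top-class probability under P
    q_a = counts_q[top_class] / n     # same class's probability under Q
    return top_class, float(p_a), float(q_a)
```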

Scalable Optimal Design of Incremental Volt/VAR Control using Deep Neural Networks

Jan 04, 2023
Sarthak Gupta, Ali Mehrizi-Sani, Spyros Chatzivasileiadis, Vassilis Kekatos

Volt/VAR control rules facilitate the autonomous operation of distributed energy resources (DERs) to regulate voltage in power distribution grids. Under non-incremental control rules, such as the one mandated by the IEEE Standard 1547, the reactive power setpoint of each DER is computed as a piecewise-linear function of the local voltage. However, the slopes of such curves must be upper-bounded to ensure stability. Incremental rules, on the other hand, add a memory term to the setpoint update, rendering them universally stable; they can thus attain enhanced steady-state voltage profiles. Optimal rule design (ORD) for incremental rules can be formulated as a bilevel program. We put forth a scalable solution by reformulating ORD as training a deep neural network (DNN). This DNN emulates the Volt/VAR dynamics for incremental rules derived as iterations of proximal gradient descent (PGD). Analytical findings and numerical tests corroborate that the proposed ORD solution can be neatly adapted to single/multi-phase feeders.
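
A minimal sketch of the unrolling idea follows: the incremental Volt/VAR update is treated as a recurrent layer whose trainable parameters are the rule slopes, and repeated application emulates the closed-loop dynamics, with the box projection playing the role of the proximal step. The linearized voltage model v = R q + v_ext, the single scalar slope per bus, and the `IncrementalVoltVAR` module are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class IncrementalVoltVAR(nn.Module):
    """Unrolled incremental Volt/VAR dynamics; the rule slopes are the weights."""
    def __init__(self, n_buses: int, q_max: float = 1.0, n_steps: int = 50):
        super().__init__()
        self.alpha = nn.Parameter(0.1 * torch.ones(n_buses))  # trainable rule slopes
        self.q_max = q_max
        self.n_steps = n_steps

    def forward(self, R: torch.Tensor, v_ext: torch.Tensor) -> torch.Tensor:
        """R: assumed bus sensitivity matrix; v_ext: voltages when all DERs inject q = 0."""
        q = torch.zeros_like(v_ext)
        for _ in range(self.n_steps):
            v = R @ q + v_ext                        # linearized voltage model (assumed)
            q = q - self.alpha * (v - 1.0)           # incremental setpoint update
            q = q.clamp(-self.q_max, self.q_max)     # proximal step = box projection
        return R @ q + v_ext                         # approximate equilibrium voltages
```

Training would then minimize the deviation of the returned equilibrium voltages from 1 p.u. over sampled grid scenarios, with the slopes kept inside their stability bounds.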

Deep Learning for Optimal Volt/VAR Control using Distributed Energy Resources

Nov 17, 2022
Sarthak Gupta, Spyros Chatzivasileiadis, Vassilis Kekatos

Given their intermittency, distributed energy resources (DERs) have been commissioned with regulating voltages at fast timescales. Although the IEEE 1547 standard specifies the shape of Volt/VAR control rules, it is not clear how to optimally customize them per DER. Optimal rule design (ORD) is a challenging problem, as Volt/VAR rules introduce nonlinear dynamics, require bilinear optimization models, and involve trade-offs between stability and steady-state performance. To tackle ORD, we develop a deep neural network (DNN) that serves as a digital twin of the Volt/VAR dynamics. The DNN takes grid conditions as inputs, uses the rule parameters as weights, and computes the equilibrium voltages as outputs. Thanks to this design, ORD is reformulated as a deep learning task that uses grid scenarios as training data and aims to drive the predicted variables, namely the equilibrium voltages, close to unity. The learning task is solved by modifying efficient deep-learning routines to enforce constraints on the rule parameters. In the course of DNN-based ORD, we also review and expand on stability conditions and convergence rates for Volt/VAR rules on single-/multi-phase feeders. To benchmark the optimality and runtime of DNN-based ORD, we also devise a novel mixed-integer nonlinear program formulation. Numerical tests showcase the merits of DNN-based ORD.
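
The sketch below illustrates the training loop implied by this setup: grid scenarios serve as training data, the digital twin maps each scenario to equilibrium voltages through its rule-parameter weights, and the loss pushes those voltages toward 1 p.u. The `volt_var_twin` module is a hypothetical stand-in for such a twin, and enforcing the stability constraints by simply clamping the parameters after each optimizer step is an illustrative simplification of the paper's constraint handling.

```python
# Hedged sketch of "ORD as a deep learning task": scenarios in, equilibrium
# voltages out, loss = distance of those voltages from 1 p.u., with a simple
# projection keeping the rule parameters inside an assumed stability box.
import torch

def train_rule_parameters(volt_var_twin, scenarios, slope_max=0.9, epochs=200, lr=1e-2):
    """`volt_var_twin(scenario)` maps grid conditions to equilibrium voltages and
    exposes the Volt/VAR rule parameters via `volt_var_twin.parameters()`."""
    opt = torch.optim.Adam(volt_var_twin.parameters(), lr=lr)
    for _ in range(epochs):
        for scenario in scenarios:
            v_eq = volt_var_twin(scenario)
            loss = ((v_eq - 1.0) ** 2).mean()          # drive voltages toward unity
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                      # project rule parameters back
                for p in volt_var_twin.parameters():   # into the assumed stability box
                    p.clamp_(0.0, slope_max)
    return volt_var_twin
```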

Neural Implicit Surface Reconstruction from Noisy Camera Observations

Oct 02, 2022
Sarthak Gupta, Patrik Huber

Representing 3D objects and scenes with neural radiance fields has become very popular over the past few years. Recently, surface-based representations have been proposed that allow reconstructing 3D objects from simple photographs. However, most current techniques require accurate camera calibration, i.e., the camera parameters corresponding to each image, which is often difficult to obtain in real-life situations. To this end, we propose a method for learning 3D surfaces from noisy camera parameters. We show that the camera parameters can be learned together with the surface representation, and demonstrate good-quality 3D surface reconstruction even with noisy camera observations.

* 4 pages - 2 for paper, 2 for supplementary 
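
A minimal sketch of the joint-optimization idea: register the noisy camera parameters as learnable tensors alongside the implicit surface network, so gradients from the reconstruction loss refine both. The tiny SDF MLP, the raw pose tensor, and the separate learning rates are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical sketch: make the noisy camera extrinsics trainable parameters so
# the same loss that fits the implicit surface also corrects the camera noise.
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(xyz)        # signed distance at each 3D point

def build_optimizer(sdf: SDFNet, noisy_poses: torch.Tensor, lr=1e-4, pose_lr=1e-5):
    """`noisy_poses` holds per-image camera extrinsics; wrapping it in a Parameter
    lets the rendering/reconstruction loss refine the cameras during training."""
    poses = nn.Parameter(noisy_poses.clone())
    optimizer = torch.optim.Adam([
        {"params": sdf.parameters(), "lr": lr},
        {"params": [poses], "lr": pose_lr},      # cameras optimized jointly
    ])
    return poses, optimizer
```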

[Re] Distilling Knowledge via Knowledge Review

May 18, 2022
Apoorva Verma, Pranjal Gulati, Sarthak Gupta

This effort aims to reproduce the experimental results and analyze the robustness of the review framework for knowledge distillation introduced in the CVPR '21 paper 'Distilling Knowledge via Knowledge Review' by Chen et al. Previous works in knowledge distillation only studied connection paths between the same levels of the student and the teacher; cross-level connection paths had not been considered. Chen et al. propose a new residual learning framework to train a single student layer using multiple teacher layers. They also design a novel fusion module to condense feature maps across levels, and a loss function that compares feature information stored across different levels to improve performance. In this work, we consistently verify the improvements in test accuracy across student models as reported in the original paper, and study the effectiveness of the novel modules by conducting ablation studies and new experiments.

* This is a reproducibility effort based on the CVPR '21 paper 'Distilling Knowledge via Knowledge Review' by Chen et al 
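
To give a flavor of the cross-level connections, the sketch below compares each student feature not only with the teacher feature at its own level but also with deeper teacher levels, using 1x1 convolutions to match channels and adaptive pooling to match spatial size. Plain MSE stands in for the paper's attention-based fusion and hierarchical context loss modules, so this is an illustrative simplification rather than the reproduced framework.

```python
# Simplified review-style distillation loss: student level i is compared against
# teacher levels j >= i (same and deeper), with a dedicated 1x1 conv aligning
# channels per (i, j) pair. Not the ABF/HCL modules of the original paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReviewKDLoss(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.align = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(sc, tc, kernel_size=1) for tc in teacher_channels)
            for sc in student_channels
        )

    def forward(self, student_feats, teacher_feats):
        loss = 0.0
        for i, f_s in enumerate(student_feats):
            for j in range(i, len(teacher_feats)):
                f_sj = self.align[i][j](f_s)                        # align channels
                f_t = teacher_feats[j].detach()                     # teacher is fixed
                f_t = F.adaptive_avg_pool2d(f_t, f_sj.shape[-2:])   # align spatial size
                loss = loss + F.mse_loss(f_sj, f_t)
        return loss
```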

DNN-based Policies for Stochastic AC OPF

Dec 04, 2021
Sarthak Gupta, Sidhant Misra, Deepjyoti Deka, Vassilis Kekatos

A prominent challenge to the safe and optimal operation of the modern power grid arises due to growing uncertainties in loads and renewables. Stochastic optimal power flow (SOPF) formulations provide a mechanism to handle these uncertainties by computing dispatch decisions and control policies that maintain feasibility under uncertainty. Most SOPF formulations consider simple control policies, such as affine policies, that are mathematically simple and resemble many policies used in current practice. Motivated by the efficacy of machine learning (ML) algorithms and the potential benefits of general control policies for cost and constraint enforcement, we put forth a deep neural network (DNN)-based policy that predicts the generator dispatch decisions in real time in response to uncertainty. The weights of the DNN are learnt using stochastic primal-dual updates that solve the SOPF without the need for prior generation of training labels and can explicitly account for the feasibility constraints in the SOPF. The advantages of the DNN policy over simpler policies, and its efficacy in enforcing safety limits and producing near-optimal solutions, are demonstrated in the context of a chance-constrained formulation on a number of test cases.
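
A hedged sketch of the stochastic primal-dual idea follows: the DNN policy maps each sampled uncertainty realization to dispatch decisions, the Lagrangian is minimized over the network weights, and the dual variables attached to the constraints are updated by projected gradient ascent, so no pre-solved OPF labels are needed. The `cost` and `constraints` callables are placeholders for an actual AC OPF model, not part of the paper.

```python
# Sketch of label-free, constraint-aware policy training via stochastic
# primal-dual updates; `constraints(u, xi)` is assumed to return g(u, xi) with
# feasibility meaning g <= 0.
import torch

def primal_dual_train(policy, scenarios, cost, constraints, n_constraints,
                      epochs=100, lr=1e-3, dual_lr=1e-2):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    lam = torch.zeros(n_constraints)                   # dual variables (multipliers)
    for _ in range(epochs):
        for xi in scenarios:                           # xi: sampled uncertainty realization
            u = policy(xi)                             # dispatch decisions
            g = constraints(u, xi)                     # constraint values
            lagrangian = cost(u, xi) + (lam * g).sum()
            opt.zero_grad()
            lagrangian.backward()                      # primal descent on DNN weights
            opt.step()
            with torch.no_grad():                      # dual ascent, projected onto >= 0
                lam = (lam + dual_lr * g.detach()).clamp(min=0.0)
    return policy, lam
```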

Controlling Smart Inverters using Proxies: A Chance-Constrained DNN-based Approach

May 02, 2021
Sarthak Gupta, Vassilis Kekatos, Ming Jin

Coordinating inverters at scale under uncertainty is the desideratum for integrating renewables in distribution grids. Unless load demands and solar generation are telemetered frequently, controlling inverters given approximate grid conditions, or proxies thereof, becomes a key specification. Although deep neural networks (DNNs) can learn optimal inverter schedules, guaranteeing feasibility is largely elusive. Rather than training DNNs to imitate already computed optimal power flow (OPF) solutions, this work integrates DNN-based inverter policies into the OPF. The proposed DNNs are trained through two OPF alternatives that confine voltage deviations either on average or through a convex restriction of chance constraints. The trained DNNs can be driven by partial, noisy, or proxy descriptors of the current grid conditions, which is important when the OPF has to be solved for an unobservable feeder. The DNN weights are trained via back-propagation, by differentiating the AC power flow equations when the network model is known; otherwise, a gradient-free variant is put forth. The latter is relevant when inverters are controlled by an aggregator having access only to a power flow solver or a digital twin of the feeder. Numerical tests compare the DNN-based inverter control schemes with the optimal inverter setpoints in terms of optimality and feasibility.

* Submitted to IEEE Transactions on Smart Grid 
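
The gradient-free variant can be pictured as a zeroth-order update on the DNN weights, where the only access to the feeder is through a black-box power-flow evaluation. The two-point, SPSA-style estimator below is an illustrative choice rather than the paper's exact scheme, and `loss_fn` is a hypothetical callable wrapping the solver or digital twin.

```python
# Zeroth-order (gradient-free) weight update: perturb the flattened DNN weights,
# query the black-box feeder model twice, and form an SPSA-style gradient estimate.
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

@torch.no_grad()
def zeroth_order_step(policy, loss_fn, lr=1e-3, mu=1e-2):
    """`loss_fn(policy)` runs the black-box feeder model on the policy's setpoints
    and returns a scalar cost (e.g., voltage deviation plus penalty terms)."""
    params = parameters_to_vector(policy.parameters())
    delta = torch.sign(torch.randn_like(params))           # random +/-1 perturbation
    vector_to_parameters(params + mu * delta, policy.parameters())
    loss_plus = loss_fn(policy)
    vector_to_parameters(params - mu * delta, policy.parameters())
    loss_minus = loss_fn(policy)
    grad_est = (loss_plus - loss_minus) / (2 * mu) * delta  # SPSA gradient estimate
    vector_to_parameters(params - lr * grad_est, policy.parameters())
    return (loss_plus + loss_minus) / 2
```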

Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset

May 13, 2020
Shreya Ghosh, Abhinav Dhall, Garima Sharma, Sarthak Gupta, Nicu Sebe

Labelling data for human behavior analysis is a complex and time-consuming task. In this paper, a fully automatic technique is proposed for labelling an image-based gaze behavior dataset for driver gaze zone estimation. Domain knowledge is added to the data recording paradigm, and labels are later generated automatically using speech-to-text (STT) conversion. In order to remove the noise in the STT output arising from speakers of different ethnicities, the speech frequency and energy are analysed. The resulting Driver Gaze in the Wild (DGW) dataset contains 586 recordings, captured at different times of the day including the evening. The large-scale dataset contains 338 subjects with an age range of 18-63 years. As the data is recorded under different lighting conditions, an illumination-robust layer is proposed for the Convolutional Neural Network (CNN). Extensive experiments show the variance in the database, resembling real-world conditions, and the effectiveness of the proposed CNN pipeline. The proposed network is also fine-tuned for the eye gaze prediction task, which shows the discriminativeness of the representation learnt by our network on the proposed DGW dataset.
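
A sketch of the automatic labelling idea: the driver speaks the zone number while looking at it, an STT engine returns time-stamped words, and those timestamps are mapped onto video frames to yield gaze-zone labels without manual annotation. The (word, start, end) transcript layout, the `frames_to_labels` helper, and the digit-only filter are assumptions for illustration; mapping spoken number words such as "three" to digits is left out.

```python
# Hypothetical mapping from time-stamped STT output to per-frame gaze-zone labels.
def frames_to_labels(transcript, fps, n_frames, settle=0.2):
    """Return a frame-indexed list of zone labels (None where no zone was spoken).
    `settle` skips a short interval at the start of each utterance while the
    driver's gaze settles on the zone."""
    labels = [None] * n_frames
    for word, start, end in transcript:        # (word, start_sec, end_sec)
        if not word.isdigit():
            continue                           # keep only spoken zone numbers
        zone = int(word)
        first = max(0, int((start + settle) * fps))
        last = min(int(end * fps), n_frames - 1)
        for frame in range(first, last + 1):
            labels[frame] = zone
    return labels
```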
