
Vinod K Kurmi


Gradient Based Activations for Accurate Bias-Free Learning

Feb 17, 2022
Vinod K Kurmi, Rishabh Sharma, Yash Vardhan Sharma, Vinay P. Namboodiri

Figures 1-4 for Gradient Based Activations for Accurate Bias-Free Learning

Bias mitigation in machine learning models is imperative, yet challenging. One prominent view towards mitigating bias is adversarial learning: a discriminator is trained to identify bias attributes such as gender, age, or race, and is used adversarially to ensure that the learned features cannot distinguish these attributes. The main drawback of such a model is that it directly introduces a trade-off with accuracy, since the features the discriminator deems sensitive for bias discrimination may be correlated with classification. In this work, we show that a biased discriminator can actually be used to improve this bias-accuracy trade-off. Specifically, we mask features using the discriminator's gradients, ensuring that features favoured for bias discrimination are de-emphasized and unbiased features are enhanced during classification. We show that this simple approach reduces bias while significantly improving accuracy. Evaluated on standard benchmarks, it improves the accuracy of adversarial methods while maintaining or even improving unbiasedness, and also outperforms several other recent methods.
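The gradient-based masking idea can be illustrated in a few lines: take the gradient of a bias discriminator's loss with respect to the features, and shrink the features where that gradient is large. The logistic discriminator, its parameters, and the sigmoid-based mask below are illustrative assumptions, not the paper's exact network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bias_gradient_mask(features, w_bias, b_bias, bias_label):
    """Mask features by the bias discriminator's gradient magnitude.

    features: (d,) feature vector
    w_bias, b_bias: logistic bias-discriminator parameters (hypothetical)
    bias_label: 0/1 bias attribute of this sample
    """
    # Forward pass of a logistic bias discriminator.
    p = sigmoid(features @ w_bias + b_bias)
    # Gradient of the binary cross-entropy w.r.t. the features.
    grad = (p - bias_label) * w_bias
    # Small mask where |grad| is large: bias-sensitive features are
    # de-emphasized, the rest are relatively enhanced.
    mask = sigmoid(-np.abs(grad))
    mask = mask / mask.mean()  # renormalize to preserve overall feature scale
    return features * mask

rng = np.random.default_rng(0)
f = rng.normal(size=8)
w = np.zeros(8)
w[0] = 5.0  # the discriminator relies only on feature 0
masked = bias_gradient_mask(f, w, 0.0, bias_label=1)
# Feature 0 (bias-sensitive) is suppressed relative to the others.
```

The masked features would then be fed to the downstream classifier in place of the raw ones.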

* AAAI 2022 (Accepted) 

Exploring Dropout Discriminator for Domain Adaptation

Jul 09, 2021
Vinod K Kurmi, Venkatesh K Subramanian, Vinay P. Namboodiri

Figures 1-4 for Exploring Dropout Discriminator for Domain Adaptation

Adapting a classifier to new domains is one of the challenging problems in machine learning, and has been addressed by many deep and non-deep learning based methods. Among these, adversarial learning is widely applied: a discriminator ensures that the source and target distributions are close. However, we suggest that rather than the point estimate obtained from a single discriminator, a distribution based on an ensemble of discriminators is better suited to bridge this gap. While this could be achieved using multiple classifiers or traditional ensemble methods, we show that a Monte Carlo dropout based ensemble discriminator suffices to obtain such a distribution-based discriminator. Specifically, we propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution, and the corresponding reversed gradients are used to align the source and target feature representations. The ensemble of discriminators helps the model learn the data distribution efficiently and provides better gradient estimates for training the feature extractor. Detailed results and a thorough ablation analysis show that our model outperforms state-of-the-art methods.
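A minimal sketch of the Monte Carlo dropout discriminator: repeated stochastic forward passes through one discriminator yield a sample-based distribution over its output, and raising the dropout rate (the curriculum) widens that distribution. The linear discriminator and its parameters here are illustrative assumptions.

```python
import numpy as np

def mc_dropout_discriminator(x, w, drop_rate, n_samples, rng):
    """n_samples stochastic forward passes through a single linear
    discriminator w (hypothetical) with inverted dropout give a
    sample-based distribution over the domain logit."""
    d = x.shape[0]
    logits = []
    for _ in range(n_samples):
        keep = rng.random(d) >= drop_rate               # Bernoulli dropout mask
        logits.append((x * keep) @ w / (1.0 - drop_rate))  # inverted-dropout scaling
    logits = np.array(logits)
    return logits.mean(), logits.std()

rng = np.random.default_rng(1)
x = rng.normal(size=16)
w = rng.normal(size=16)
# Curriculum: gradually raising the dropout rate increases the variance
# of the discriminator's sample-based output distribution.
stds = [mc_dropout_discriminator(x, w, r, 500, rng)[1] for r in (0.1, 0.3, 0.5)]
```

In the paper's setting, the gradients from these stochastic discriminator samples are reversed to train the feature extractor; here we only show the distribution itself.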

* Extension of our BMVC 2019 paper (arXiv:1907.10628) 

Sensor-invariant Fingerprint ROI Segmentation Using Recurrent Adversarial Learning

Jul 03, 2021
Indu Joshi, Ayush Utkarsh, Riya Kothari, Vinod K Kurmi, Antitza Dantcheva, Sumantra Dutta Roy, Prem Kumar Kalra

Figures 1-4 for Sensor-invariant Fingerprint ROI Segmentation Using Recurrent Adversarial Learning

A fingerprint region of interest (ROI) segmentation algorithm is designed to separate the foreground fingerprint from the background noise. All learning-based state-of-the-art fingerprint ROI segmentation algorithms in the literature are benchmarked in scenarios where both the training and testing databases consist of fingerprint images acquired from the same sensors. However, when testing is conducted on a different sensor, segmentation performance is often unsatisfactory. As a result, every time a new fingerprint sensor is used for testing, the ROI segmentation model needs to be re-trained with fingerprint images acquired from the new sensor and their corresponding manually marked ROIs. Manually marking fingerprint ROIs is expensive: it is time-consuming and, more importantly, requires domain expertise. To save the human annotation effort required by the state of the art, we propose a fingerprint ROI segmentation model that aligns the features of fingerprint images from the unseen sensor so that they are similar to those of fingerprints whose ground-truth ROI masks are available for training. Specifically, we propose a recurrent adversarial learning based feature alignment network that helps the segmentation model learn sensor-invariant features, which in turn improve segmentation performance on fingerprints acquired from the new sensor. Experiments on publicly available FVC databases demonstrate the efficacy of the proposed work.
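The adversarial feature-alignment principle can be sketched with gradient reversal: a sensor discriminator predicts which sensor produced a feature, and the features are updated to *ascend* that discriminator's loss, pulling the two sensors' feature distributions together. The fixed logistic discriminator, the 2-D features, and the step sizes below are all illustrative assumptions, not the paper's recurrent network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reverse_gradient_step(Z, y, w, lr=0.2):
    """One gradient-reversal update on features Z: ascend the sensor
    discriminator's BCE loss so features from different sensors become
    indistinguishable. w is a (hypothetical) fixed linear sensor
    discriminator, y the per-row sensor label (0 or 1)."""
    p = sigmoid(Z @ w)                  # discriminator's sensor prediction
    grad = (p - y)[:, None] * w         # dBCE/dZ for a logistic discriminator
    return Z + lr * grad                # reversed sign: ascend instead of descend

rng = np.random.default_rng(2)
w = np.array([1.0, 0.0])
Za = rng.normal(loc=[-2.0, 0.0], size=(50, 2))   # sensor A features
Zb = rng.normal(loc=[+2.0, 0.0], size=(50, 2))   # sensor B features
Z = np.vstack([Za, Zb])
y = np.r_[np.zeros(50), np.ones(50)]

def sensor_gap(Z):
    # Distance between the two sensors' mean projections onto w.
    return abs((Z[y == 0] @ w).mean() - (Z[y == 1] @ w).mean())

gap_before = sensor_gap(Z)
for _ in range(10):
    Z = reverse_gradient_step(Z, y, w)
gap_after = sensor_gap(Z)  # the sensor gap shrinks
```

In the paper this alignment is applied recurrently to the segmentation network's features rather than directly to data points.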

* IJCNN 2021 (Accepted) 

Data Uncertainty Guided Noise-aware Preprocessing Of Fingerprints

Jul 02, 2021
Indu Joshi, Ayush Utkarsh, Riya Kothari, Vinod K Kurmi, Antitza Dantcheva, Sumantra Dutta Roy, Prem Kumar Kalra

Figures 1-4 for Data Uncertainty Guided Noise-aware Preprocessing Of Fingerprints

The effectiveness of fingerprint-based authentication systems on good-quality fingerprints was established long ago. However, the performance of standard fingerprint matching systems on noisy and poor-quality fingerprints is far from satisfactory. Towards this, we propose a data uncertainty-based framework that enables state-of-the-art fingerprint preprocessing models to quantify the noise present in the input image and identify fingerprint regions with background noise and poor ridge clarity. Quantifying the noise helps the model in two ways: first, it makes the objective function adaptive to the noise in a particular input fingerprint and consequently helps achieve robust performance on noisy and distorted fingerprint regions; second, it provides a noise variance map that indicates noisy pixels in the input fingerprint image. The predicted noise variance map enables end-users to understand erroneous predictions due to noise in the input image. Extensive experimental evaluation on 13 publicly available fingerprint databases, across different architectural choices and two fingerprint processing tasks, demonstrates the effectiveness of the proposed framework.
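A standard way to make the objective adaptive to predicted noise is the heteroscedastic (aleatoric) loss, in which the model outputs a per-pixel log-variance alongside its prediction: residuals are down-weighted where predicted noise is high, and predicting high noise everywhere is penalized. This Kendall-and-Gal-style formulation is a sketch of the idea, not necessarily the paper's exact objective.

```python
import numpy as np

def noise_adaptive_loss(pred, log_var, target):
    """Heteroscedastic loss: 0.5 * exp(-s) * (pred - target)^2 + 0.5 * s,
    averaged over pixels, where s = predicted per-pixel noise log-variance.
    exp(-s) down-weights residuals in regions the model flags as noisy,
    while the +0.5*s term stops it from flagging everything."""
    return float(np.mean(0.5 * np.exp(-log_var) * (pred - target) ** 2
                         + 0.5 * log_var))

# The same large residual costs less when the model also flags the
# region as noisy (high predicted log-variance).
flagged = noise_adaptive_loss(np.array([0.0]), np.array([3.0]), np.array([2.0]))
unflagged = noise_adaptive_loss(np.array([0.0]), np.array([0.0]), np.array([2.0]))
```

The predicted `log_var` map doubles as the noise variance map shown to end-users.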

* IJCNN 2021 (Accepted) 

Collaborative Learning to Generate Audio-Video Jointly

Apr 01, 2021
Vinod K Kurmi, Vipul Bajaj, Badri N Patro, K S Venkatesh, Vinay P Namboodiri, Preethi Jyothi

Figures 1-4 for Collaborative Learning to Generate Audio-Video Jointly

A number of techniques have demonstrated GAN-based generation of multimedia data for one modality at a time, such as images, videos, or audio. However, the task of multi-modal generation, specifically of audio and video together, has so far not been sufficiently well explored. Towards this, we propose a method that generates naturalistic samples of video and audio through joint, correlated generation of the two modalities. The proposed method uses multiple discriminators to ensure that the audio, the video, and their joint output are each indistinguishable from real-world samples. We present a dataset for this task and show that we are able to generate realistic samples. The method is validated using standard metrics such as Inception Score and Frechet Inception Distance (FID), as well as through human evaluation.
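The multi-discriminator objective can be sketched as a sum of adversarial terms: the generator must fool an audio-only discriminator, a video-only discriminator, and a joint discriminator that sees the paired output. The uniform weighting and raw-logit inputs below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    """Binary cross-entropy for a single probability p and label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def multi_discriminator_g_loss(d_audio, d_video, d_joint):
    """Generator objective against three discriminator logits: the audio,
    the video, and the joint audio-video pair must each be classified as
    real (label 1). Weights are illustrative (uniform)."""
    losses = [bce(sigmoid(d), 1.0) for d in (d_audio, d_video, d_joint)]
    return sum(losses) / 3.0

# Loss is low when every discriminator is fooled (high real-logits),
# high when any of them catches the generated pair.
fooled = multi_discriminator_g_loss(2.0, 2.0, 2.0)
caught = multi_discriminator_g_loss(-2.0, -2.0, -2.0)
```

The joint term is what enforces the audio-video correlation that single-modality GANs lack.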

* ICASSP 2021 (Accepted) 

Domain Impression: A Source Data Free Domain Adaptation Method

Feb 17, 2021
Vinod K Kurmi, Venkatesh K Subramanian, Vinay P Namboodiri

Figures 1-4 for Domain Impression: A Source Data Free Domain Adaptation Method

Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels. However, actual source samples are not always available in practice, due to memory constraints, privacy concerns, or challenges in sharing data. This practical scenario creates a bottleneck for domain adaptation. This paper addresses it by proposing a domain adaptation technique that does not need any source data: instead of the source data, we are only provided with a classifier trained on it. Our approach is based on a generative framework in which the trained classifier is used to generate samples from the source classes; we learn the joint distribution of the data through energy-based modeling of the trained classifier. At the same time, a new classifier is adapted to the target domain. We perform various ablation analyses under different experimental setups and demonstrate that the proposed approach achieves better results than the baseline models in this challenging scenario.
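The core idea of recovering source-like samples from a frozen classifier can be sketched in its simplest form: start from noise and ascend the classifier's probability for a chosen class. The paper uses a generative framework with energy-based modeling; the plain gradient-ascent "impression" below, with its linear-softmax classifier and step sizes, is only the simplest analogue under stated assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def generate_impression(W, b, cls, steps=500, lr=0.2, rng=None):
    """Synthesize a 'domain impression' of class `cls` from a frozen
    linear-softmax source classifier (W, b) with no source data:
    gradient ascent on log p(cls | x) starting from noise."""
    x = rng.normal(size=W.shape[1])
    onehot = np.eye(W.shape[0])[cls]
    for _ in range(steps):
        p = softmax(W @ x + b)
        x += lr * W.T @ (onehot - p)   # d log p(cls|x) / dx for softmax
    return x

rng = np.random.default_rng(3)
W = rng.normal(size=(3, 5))            # frozen source classifier (hypothetical)
b = np.zeros(3)
x = generate_impression(W, b, cls=1, rng=rng)
conf = softmax(W @ x + b)[1]           # the impression is confidently class 1
```

Such synthesized samples then stand in for the missing source data when adapting the new target classifier.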

* WACV 2021 (Published) 

Do Not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting

Feb 03, 2021
Vinod K Kurmi, Badri N. Patro, Venkatesh K. Subramanian, Vinay P. Namboodiri

Figures 1-4 for Do Not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting

One of the major limitations of deep learning models is that they suffer catastrophic forgetting in an incremental learning scenario. Several approaches have been proposed to tackle incremental learning; most are based on knowledge distillation and do not adequately utilize the information provided by older task models, such as the uncertainty in their predictions. Predictive uncertainty provides distributional information that can be applied to mitigate catastrophic forgetting in a deep learning framework. In the proposed work, we adopt a Bayesian formulation to obtain the data and model uncertainties, and incorporate a self-attention framework to address the incremental learning problem. We define distillation losses in terms of aleatoric uncertainty and self-attention, investigate ablations of these losses, and obtain better accuracy on standard benchmarks.
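One way an uncertainty-aware distillation loss can look: the old task model's predictions constrain the new model strongly where the old model was confident, and weakly where its aleatoric uncertainty was high. The exp(-log-variance) weighting below is a sketch of that idea (the paper additionally distills self-attention maps), with illustrative logits.

```python
import numpy as np

def uncertainty_weighted_distillation(new_logits, old_logits, old_log_var):
    """Distillation loss down-weighted by the old model's aleatoric
    uncertainty: exp(-s) is small where the old task model predicted
    high noise log-variance s, so uncertain old outputs constrain the
    new model only weakly."""
    w = np.exp(-old_log_var)
    return float(np.mean(w * (new_logits - old_logits) ** 2))

old = np.array([2.0, -1.0, 0.5])       # old task model's logits (illustrative)
new = np.array([1.0, -1.5, 0.5])       # new model's logits on the same input
certain = uncertainty_weighted_distillation(new, old, np.zeros(3))
uncertain = uncertainty_weighted_distillation(new, old, np.full(3, 2.0))
# The same disagreement is penalized less when the old model was uncertain.
```

This lets the new model deviate from the old one precisely where the old model's knowledge was least reliable.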

* WACV 2021 (Accepted) 