Prateek Munjal

FAIRS -- Soft Focus Generator and Attention for Robust Object Segmentation from Extreme Points

Apr 04, 2020
Ahmed H. Shahin, Prateek Munjal, Ling Shao, Shadab Khan

Semantic segmentation from user inputs has been actively studied to facilitate interactive segmentation for data annotation and other applications. Recent studies have shown that extreme points can effectively encode user input: a heat map generated from the extreme points is appended to the RGB image and fed to the model during training. In this study, we present FAIRS -- a new approach to generating object segmentations from user inputs in the form of extreme points and corrective clicks. We propose a scalable scheme for encoding user input from extreme points and corrective clicks that allows the network to work with a variable number of clicks, including corrective clicks for output refinement. We also integrate a dual attention module to help the model attend preferentially to the objects of interest. We demonstrate that these additions yield significant improvements over the state of the art in dense object segmentation from user inputs on multiple large-scale datasets. Through experiments, we demonstrate our method's ability to generate high-quality training data as well as its scalability in incorporating extreme points, guiding clicks, and corrective clicks in a principled manner.
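
The abstract does not spell out the encoding, but the recipe it builds on renders each click as a 2D Gaussian in a heat map appended to the RGB input. Below is a minimal sketch of that style of input encoding; the function names, the fixed Gaussian width, and the split into separate extreme-point and corrective-click channels are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def click_heatmap(clicks, height, width, sigma=10.0):
    """Render user clicks (extreme points or corrective clicks) as a
    single-channel heat map: one 2D Gaussian per click, max-combined so
    any number of clicks produces the same channel layout."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    heat = np.zeros((height, width), dtype=np.float32)
    for (cx, cy) in clicks:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        heat = np.maximum(heat, g)
    return heat

def encode_input(rgb, extreme_points, corrective_clicks=()):
    """Stack the RGB image with click heat maps so the segmentation
    network sees a fixed number of channels regardless of click count."""
    h, w, _ = rgb.shape
    channels = [rgb.astype(np.float32) / 255.0,
                click_heatmap(extreme_points, h, w)[..., None],
                click_heatmap(corrective_clicks, h, w)[..., None]]
    return np.concatenate(channels, axis=-1)

# Example: four extreme points on a 256x256 image.
img = np.zeros((256, 256, 3), dtype=np.uint8)
pts = [(30, 128), (220, 128), (128, 20), (128, 240)]
x = encode_input(img, pts)
print(x.shape)  # (256, 256, 5)
```

Because every set of clicks collapses into fixed-size heat-map channels, the same network input shape accommodates four extreme points, extra guiding clicks, or later corrective clicks.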

Towards Robust and Reproducible Active Learning Using Neural Networks

Feb 21, 2020
Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, Shadab Khan

Active learning (AL) is a promising ML paradigm with the potential to parse through large volumes of unlabeled data and reduce annotation cost in domains where labeling an entire dataset can be prohibitively expensive. Recently proposed neural-network-based AL methods use different heuristics to accomplish this goal. In this study, we show that recent AL methods offer a gain over the random-sampling baseline only under a brittle combination of experimental conditions. We demonstrate that such marginal gains vanish when experimental factors are changed, leading to reproducibility issues and suggesting that these AL methods lack robustness. We also observe that with a properly tuned model employing recently proposed regularization techniques, performance improves significantly for all AL methods, including the random-sampling baseline, and the performance differences among the AL methods become negligible. Based on these observations, we suggest a set of experiments that are critical for assessing the true effectiveness of an AL method, and we present an open-source toolkit to facilitate them. We believe our findings and recommendations will help advance reproducible research in robust AL using neural networks.
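
For context on what such an evaluation compares, here is a minimal sketch of a single query round pitting entropy-based uncertainty sampling against the random baseline; the toy data and budget are placeholders, and this is a generic illustration rather than the paper's open-source toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_query(unlabeled_idx, budget):
    """Random baseline: sample `budget` indices uniformly."""
    return rng.choice(unlabeled_idx, size=budget, replace=False)

def entropy_query(unlabeled_idx, probs, budget):
    """Uncertainty sampling: pick the points whose predictive
    distribution has the highest entropy."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    order = np.argsort(-ent)  # most uncertain first
    return unlabeled_idx[order[:budget]]

# Toy round: 1000 unlabeled points, 3 classes, budget of 50.
unlabeled_idx = np.arange(1000)
probs = rng.dirichlet(alpha=np.ones(3), size=1000)  # stand-in for model outputs
picked_rand = random_query(unlabeled_idx, 50)
picked_ent = entropy_query(unlabeled_idx, probs, 50)
```

The paper's point is that conclusions drawn from such comparisons can flip with the model, regularization, and other experimental factors, which is why a fixed, well-tuned evaluation protocol matters.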

Implicit Discriminator in Variational Autoencoder

Sep 28, 2019
Prateek Munjal, Akanksha Paul, Narayanan C. Krishnan

Recently generative models have focused on combining the advantages of variational autoencoders (VAE) and generative adversarial networks (GAN) for good reconstruction and generative abilities. In this work we introduce a novel hybrid architecture, Implicit Discriminator in Variational Autoencoder (IDVAE), that combines a VAE and a GAN, which does not need an explicit discriminator network. The fundamental premise of the IDVAE architecture is that the encoder of a VAE and the discriminator of a GAN utilize common features and therefore can be trained as a shared network, while the decoder of the VAE and the generator of the GAN can be combined to learn a single network. This results in a simple two-tier architecture that has the properties of both a VAE and a GAN. The qualitative and quantitative experiments on real-world benchmark datasets demonstrates that IDVAE perform better than the state of the art hybrid approaches. We experimentally validate that IDVAE can be easily extended to work in a conditional setting and demonstrate its performance on complex datasets.
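
A minimal PyTorch sketch of the two-tier weight sharing the abstract describes: one encoder trunk serves as both the VAE encoder and the GAN discriminator (via an extra real/fake head), and one decoder serves as both the VAE decoder and the GAN generator. The layer sizes and heads here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared trunk: VAE encoder (mu, logvar heads) and GAN
    discriminator (real/fake head) on the same features."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.disc = nn.Linear(h_dim, 1)  # real/fake logit

    def forward(self, x):
        h = self.trunk(x)
        return self.mu(h), self.logvar(h), self.disc(h)

class SharedDecoder(nn.Module):
    """Single network acting as both VAE decoder and GAN generator."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)

# One forward pass: encode, reparameterize, decode, and score.
enc, dec = SharedEncoder(), SharedDecoder()
x = torch.rand(8, 784)
mu, logvar, real_logit = enc(x)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
x_hat = dec(z)
_, _, fake_logit = enc(x_hat)  # the same trunk scores reconstructions as fake
```

The design choice being illustrated is that no separate discriminator parameters exist: the adversarial signal comes from an extra head on the encoder trunk.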

Semantically Aligned Bias Reducing Zero Shot Learning

Apr 16, 2019
Akanksha Paul, Narayanan C. Krishnan, Prateek Munjal

Zero-shot learning (ZSL) aims to recognize unseen classes by exploiting semantic relationships between seen and unseen classes. Two major problems faced by ZSL algorithms are the hubness problem and the bias towards seen classes. Existing ZSL methods focus on only one of these problems in the conventional and generalized ZSL settings. In this work, we propose a novel approach, Semantically Aligned Bias Reducing (SABR) ZSL, which addresses both problems. It overcomes the hubness problem by learning a latent space that preserves the semantic relationships among the labels while encoding discriminative information about the classes. Further, we propose ways to reduce the bias towards seen classes through a simple cross-validation process in the inductive setting and a novel weak transfer constraint in the transductive setting. Extensive experiments on three benchmark datasets show that the proposed model significantly outperforms existing state-of-the-art algorithms by ~1.5-9% in the conventional ZSL setting and by ~2-14% in generalized ZSL, for both the inductive and transductive settings.

* Published at the Conference on Computer Vision and Pattern Recognition (CVPR 2019) 
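
To make the shared-latent-space idea concrete, here is a minimal sketch that projects image features and class-attribute vectors into a common space and classifies by cosine similarity, so label semantics shape the decision boundary; the dimensions, the linear projections, and the plain cross-entropy loss are illustrative assumptions, not the SABR formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAligner(nn.Module):
    """Project image features and class-attribute vectors into a shared
    latent space; classify a feature by its similarity to each class's
    latent embedding."""
    def __init__(self, feat_dim=2048, attr_dim=85, z_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, z_dim)
        self.attr_proj = nn.Linear(attr_dim, z_dim)

    def forward(self, feats, class_attrs):
        z_img = F.normalize(self.img_proj(feats), dim=-1)
        z_cls = F.normalize(self.attr_proj(class_attrs), dim=-1)
        return z_img @ z_cls.t()  # cosine-similarity logits per class

# Toy step: 16 images, 40 seen classes with 85-dim attribute vectors.
model = LatentAligner()
feats = torch.randn(16, 2048)
attrs = torch.rand(40, 85)
labels = torch.randint(0, 40, (16,))
logits = model(feats, attrs)
loss = F.cross_entropy(logits, labels)
loss.backward()
```

Because unseen classes also come with attribute vectors, the same projection yields latent embeddings for them at test time, which is what makes zero-shot classification possible in this kind of setup.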