Santanu Chaudhury

Deriving Explanation of Deep Visual Saliency Models

Sep 08, 2021
Sai Phani Kumar Malladi, Jayanta Mukhopadhyay, Chaker Larabi, Santanu Chaudhury

Deep neural networks have had a profound impact on achieving human-level performance in visual saliency prediction. However, it is still unclear how they learn the task and what this implies for our understanding of the human visual system. In this work, we develop a technique to derive explainable saliency models from their corresponding deep neural architecture based saliency models by applying human perception theories and conventional concepts of saliency. This technique helps us understand the learning pattern of the deep network at its intermediate layers through their activation maps. We first consider two state-of-the-art deep saliency models, namely UNISAL and MSI-Net, for our interpretation. We use a set of biologically plausible log-Gabor filters to identify and reconstruct their activation maps within our explainable saliency model. The final saliency map is generated from these reconstructed activation maps. We also build our own deep saliency model, a cross-concatenated multi-scale residual block based network (CMRNet), for saliency prediction. We then evaluate the explainable models derived from UNISAL, MSI-Net and CMRNet on three benchmark datasets and compare them with other state-of-the-art methods. We propose that this explainability approach can be applied to any deep visual saliency model for interpretation, which makes it generic.
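
A rough illustration of the kind of biologically plausible filtering involved: the sketch below builds a 2D log-Gabor filter in the frequency domain and applies it to an activation map. The parameter values and filter-bank size are assumptions for illustration, not the settings used in the paper.

```python
# Sketch: a frequency-domain log-Gabor filter applied to an activation map.
# f0, sigma_ratio, the orientations and the 4-filter bank are illustrative
# assumptions, not the paper's configuration.
import numpy as np

def log_gabor(size, f0=0.1, sigma_ratio=0.65, theta0=0.0, sigma_theta=np.pi / 8):
    """Return a frequency-domain log-Gabor filter of shape (size, size)."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0  # avoid log(0) at the DC component
    theta = np.arctan2(fy, fx)

    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0  # log-Gabor filters have no DC response
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dtheta**2) / (2 * sigma_theta**2))
    return radial * angular

def filter_response(activation_map, gabor):
    """Filter an activation map with the frequency-domain log-Gabor kernel."""
    spectrum = np.fft.fft2(activation_map)
    return np.real(np.fft.ifft2(spectrum * gabor))

# Example: responses of a 64x64 activation map at 4 orientations.
act = np.random.rand(64, 64)
bank = [log_gabor(64, theta0=k * np.pi / 4) for k in range(4)]
responses = [filter_response(act, g) for g in bank]
```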

Understanding Character Recognition using Visual Explanations Derived from the Human Visual System and Deep Networks

Aug 29, 2021
Chetan Ralekar, Shubham Choudhary, Tapan Kumar Gandhi, Santanu Chaudhury

Human observers engage in selective information uptake when classifying visual patterns. The same is true of deep neural networks, which currently constitute the best-performing artificial vision systems. Our goal is to examine the congruence, or lack thereof, in the information-gathering strategies of the two systems. We operationalize our investigation as a character recognition task. We use eye-tracking to assay the spatial distribution of information hotspots for humans via fixation maps, and an activation mapping technique to obtain analogous distributions for deep networks via visualization maps. Qualitative comparison between visualization maps and fixation maps reveals an interesting correlate of congruence: for correctly classified characters, the deep learning model attends to regions of the character similar to those that humans fixate. Conversely, when the focused regions differ between humans and deep nets, the characters are typically misclassified by the latter. Hence, we propose to use the visual fixation maps obtained from the eye-tracking experiment as a supervisory input to align the model's focus on relevant character regions. We find that such supervision improves the model's performance significantly and does not require any additional parameters. This approach has the potential to find applications in diverse domains such as medical analysis and surveillance, in which explainability helps to determine system fidelity.
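
A minimal sketch of how fixation maps could act as a supervisory signal: the classification loss is augmented with a divergence between the model's spatial attention map and the human fixation map. The source of the attention map, the KL form of the alignment term and the weight `lam` are assumptions, not the paper's exact formulation.

```python
# Sketch: fixation-map supervision added to a character classifier.
import torch
import torch.nn.functional as F

def supervised_attention_loss(logits, labels, attention_map, fixation_map, lam=0.5):
    """Classification loss plus a term aligning model attention with human fixations."""
    cls_loss = F.cross_entropy(logits, labels)

    # Normalize both maps to probability distributions over spatial locations.
    att = attention_map.flatten(1)
    fix = fixation_map.flatten(1)
    att = att / (att.sum(dim=1, keepdim=True) + 1e-8)
    fix = fix / (fix.sum(dim=1, keepdim=True) + 1e-8)

    # KL divergence pulls the model's focus toward human-fixated regions.
    align_loss = F.kl_div(att.clamp_min(1e-8).log(), fix, reduction="batchmean")
    return cls_loss + lam * align_loss
```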

Multi-Task Driven Explainable Diagnosis of COVID-19 using Chest X-ray Images

Aug 03, 2020
Aakarsh Malhotra, Surbhi Mittal, Puspita Majumdar, Saheb Chhabra, Kartik Thakral, Mayank Vatsa, Richa Singh, Santanu Chaudhury, Ashwin Pudrod, Anjali Agrawal

With the number of COVID-19 cases increasing globally, countries are ramping up testing. While RT-PCR kits are available in sufficient quantity in several countries, others face challenges with limited availability of testing kits and processing centers in remote areas. This has motivated researchers to find alternative testing methods that are reliable, easily accessible and faster. Chest X-ray (CXR) is one modality that is gaining acceptance for screening. In this direction, the paper makes two primary contributions. First, we present the COVID-19 Multi-Task Network, an automated end-to-end network for COVID-19 screening. The proposed network not only predicts whether a CXR shows COVID-19 features but also performs semantic segmentation of the regions of interest to make the model explainable. Second, with the help of medical professionals, we manually annotate the lung regions of 9000 frontal chest radiographs taken from ChestXray-14, CheXpert and a consolidated COVID-19 dataset. Further, 200 chest radiographs pertaining to COVID-19 patients are also annotated for semantic segmentation. This database will be released to the research community.
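
A minimal sketch of a multi-task design in this spirit, with a shared encoder feeding a classification head and a segmentation head. The backbone, channel sizes and head depths are placeholders rather than the actual COVID-19 Multi-Task Network.

```python
# Sketch: shared encoder with classification and segmentation heads.
import torch
import torch.nn as nn

class MultiTaskCXR(nn.Module):
    def __init__(self, encoder, feat_channels=512, num_classes=2):
        super().__init__()
        self.encoder = encoder  # any backbone returning (B, feat_channels, H, W)
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_channels, num_classes)
        )
        self.seg_head = nn.Sequential(
            nn.Conv2d(feat_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),  # per-pixel logits for the region-of-interest mask
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.cls_head(feats), self.seg_head(feats)
```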

Compressive sensing based privacy for fall detection

Jan 10, 2020
Ronak Gupta, Prashant Anand, Santanu Chaudhury, Brejesh Lall, Sanjay Singh

Fall detection holds immense importance in healthcare, where timely detection allows for instant medical assistance. In this context, we propose a 3D ConvNet architecture consisting of 3D Inception modules for fall detection. The proposed architecture is a custom version of the Inflated 3D (I3D) architecture that takes as spatio-temporal input compressed measurements of the video sequence, obtained from a compressive sensing framework, rather than the raw video sequence used by the original I3D convolutional neural network. This design is adopted because privacy is a major concern for patients monitored through RGB cameras. The proposed framework for fall detection is flexible with respect to a wide variety of measurement matrices. Ten action classes randomly selected from Kinetics-400, containing no fall examples, are employed to train our 3D ConvNet after applying compressive sensing with different types of sensing matrices to the original video clips. Our results show that the 3D ConvNet's performance remains unchanged across sensing matrices. Moreover, the performance obtained with the Kinetics-pretrained 3D ConvNet on compressively sensed fall videos from benchmark datasets is better than that of state-of-the-art techniques.

* accepted in NCVPRIPG 2019 
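
A minimal sketch of the compressive sensing front end, assuming a random Gaussian sensing matrix applied frame-wise. The measurement rate and matrix type are illustrative; the paper evaluates several kinds of sensing matrices.

```python
# Sketch: compressed measurements of video frames before the 3D ConvNet.
import numpy as np

def compress_frames(frames, measurement_rate=0.25, seed=0):
    """frames: (T, H, W) video clip -> (T, M) compressed measurements."""
    t, h, w = frames.shape
    n = h * w
    m = int(measurement_rate * n)
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # Gaussian sensing matrix
    return frames.reshape(t, n) @ phi.T  # y_t = Phi x_t for each frame

clip = np.random.rand(16, 64, 64)     # toy 16-frame clip
measurements = compress_frames(clip)  # shape (16, 1024)
```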

DSAL-GAN: Denoising based Saliency Prediction with Generative Adversarial Networks

Apr 02, 2019
Prerana Mukherjee, Manoj Sharma, Megh Makwana, Ajay Pratap Singh, Avinash Upadhyay, Akkshita Trivedi, Brejesh Lall, Santanu Chaudhury

Synthesizing high-quality saliency maps from noisy images is a challenging problem in computer vision with many practical applications. Existing saliency detection techniques cannot handle noise perturbations smoothly and fail to delineate the salient objects present in the scene. In this paper, we present a novel end-to-end coupled Denoising based Saliency Prediction with Generative Adversarial Network (DSAL-GAN) framework to address salient object detection in noisy images. DSAL-GAN consists of two generative adversarial networks (GANs) trained end-to-end to perform denoising and saliency prediction together in a holistic manner. The first GAN consists of a generator that denoises the noisy input image and a discriminator that checks whether the output is a denoised image or the ground-truth original image. The second GAN predicts saliency maps from the raw pixels of the denoised image using a data-driven saliency prediction method with an adversarial loss. A cycle-consistency loss is also incorporated to further improve salient-region prediction. We demonstrate through comprehensive evaluation that the proposed framework outperforms several baseline saliency models on various performance benchmarks.
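
A minimal sketch of what a combined generator objective for the two coupled GANs might look like: adversarial terms for the denoising and saliency discriminators plus a cycle-consistency penalty. The L1 form of the cycle term, the reconstruction it compares and the loss weights are assumptions for illustration, not the paper's exact objective.

```python
# Sketch: combined generator loss for coupled denoising and saliency GANs.
import torch
import torch.nn.functional as F

def generator_loss(d_denoise_fake, d_sal_fake, reconstructed, noisy, lam_cycle=10.0):
    """Adversarial terms for both GANs plus a cycle-consistency penalty."""
    adv_denoise = F.binary_cross_entropy_with_logits(
        d_denoise_fake, torch.ones_like(d_denoise_fake))   # fool denoising discriminator
    adv_saliency = F.binary_cross_entropy_with_logits(
        d_sal_fake, torch.ones_like(d_sal_fake))            # fool saliency discriminator

    # One plausible cycle term: the reconstruction of the input should match it.
    cycle = F.l1_loss(reconstructed, noisy)
    return adv_denoise + adv_saliency + lam_cycle * cycle
```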

NEMGAN: Noise Engineered Mode-matching GAN

Nov 08, 2018
Deepak Mishra, Prathosh AP, Aravind J, Prashant Pandey, Santanu Chaudhury

Conditional generation refers to the process of sampling from an unknown distribution conditioned on the semantics of the data. This can be achieved by augmenting the generative model with the desired semantic labels, although this is not straightforward in an unsupervised setting where the semantic label of each data sample is unknown. In this paper, we address this issue by proposing a method that can generate samples conditioned on the properties of a latent distribution engineered in accordance with a certain data prior. In particular, a latent-space inversion network is trained in tandem with a generative adversarial network such that the modal properties of the latent-space distribution are induced in the data-generating distribution. We demonstrate that our model, despite being fully unsupervised, is effective in learning meaningful representations through its mode-matching property. We validate our method on multiple unsupervised tasks, such as conditional generation, attribute discovery and inference, using three real-world image datasets, namely MNIST, CIFAR-10 and CelebA, and show that the results are comparable to state-of-the-art methods.
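
A minimal sketch of engineering a multimodal latent prior whose mode proportions follow an assumed data prior, so that each generated sample carries an implicit mode label. The mixture construction, the offsets and the prior values are purely illustrative, not NEMGAN's actual latent design.

```python
# Sketch: sampling from an engineered multimodal latent distribution.
import torch

def sample_engineered_latent(batch_size, latent_dim=64, prior=(0.5, 0.3, 0.2)):
    prior = torch.tensor(prior)
    modes = torch.multinomial(prior, batch_size, replacement=True)  # pick a mode per sample
    # Shift one latent coordinate by the mode index to separate the modes.
    z = torch.randn(batch_size, latent_dim)
    z[:, 0] += modes.float() * 3.0
    return z, modes

z, modes = sample_engineered_latent(32)  # z feeds the generator; modes act as pseudo-labels
```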

Locate, Segment and Match: A Pipeline for Object Matching and Registration

May 01, 2018
Deepak Mishra, Rajeev Ranjan, Santanu Chaudhury, Mukul Sarkar, Arvinder Singh Soin

Image registration requires simultaneous processing of multiple images to match the keypoints or landmarks of the contained objects. These images often come from different modalities, for example CT and ultrasound (US), and pose the challenge of establishing one-to-one correspondence. In this work, a novel pipeline of convolutional neural networks is developed to perform the desired registration. The complete objective is divided into three parts: localization of the object of interest, segmentation, and matching transformation. Most existing approaches skip the localization step and are prone to fail in general scenarios. We overcome this challenge through detection, which also establishes an initial correspondence between the images. A modified version of the single-shot multibox detector is used for this purpose. The detected region is cropped and subsequently segmented to generate a mask corresponding to the desired object. The mask is used by a spatial transformer network employing thin-plate spline deformation to perform the desired registration. Initial experiments on the MNIST and Caltech-101 datasets show that the proposed model is able to accurately match the segmented images. The proposed framework is extended to the registration of CT and US images; it is free from data-specific assumptions and has better generalization capability than existing rule-based/classical approaches.

* 8 pages, 4 figures 
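
A minimal numpy sketch of the thin-plate spline deformation used in the final matching stage, fitted here from already-matched landmarks. In the paper the transform is driven by a spatial transformer network, so this is only an illustration of the underlying warp.

```python
# Sketch: fitting and applying a thin-plate spline from matched control points.
import numpy as np

def tps_kernel(r2):
    """U(r) = r^2 log(r^2), with U(0) = 0."""
    return r2 * np.log(r2 + 1e-12)

def fit_tps(src, dst):
    """src, dst: (n, 2) matched control points. Returns warp parameters."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)  # (n + 3, 2): spline weights then affine part

def warp_points(pts, src, params):
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = tps_kernel(d2)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ params[:-3] + P @ params[-3:]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + 0.1 * np.random.randn(4, 2)
params = fit_tps(src, dst)
print(warp_points(src, src, params))  # ~= dst at the control points
```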

Unsupervised Despeckling

Jan 10, 2018
Deepak Mishra, Santanu Chaudhury, Mukul Sarkar, Arvinder Singh Soin

The contrast and quality of ultrasound images are adversely affected by the excessive presence of speckle. However, being an inherent imaging property, speckle helps in tissue characterization and tracking. Thus, despeckling ultrasound images requires reducing the extent of speckle without oversmoothing. In this letter, we address the despeckling problem using an unsupervised deep adversarial approach. A despeckling residual neural network (DRNN) is trained with an adversarial loss imposed by a discriminator, which tries to differentiate between the despeckled images generated by the DRNN and a set of high-quality images. Further, to prevent the DRNN from oversmoothing, a structural loss term is used along with the adversarial loss. Experimental evaluations show that the proposed DRNN outperforms state-of-the-art despeckling approaches.
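
A minimal sketch of a generator objective combining the adversarial term with a structural penalty against oversmoothing. Using an L1 term between the despeckled output and the speckled input as the structural loss is an assumption for illustration, not necessarily the paper's structural term.

```python
# Sketch: DRNN-style generator loss = adversarial term + structural term.
import torch
import torch.nn.functional as F

def drnn_loss(d_fake_logits, despeckled, speckled_input, lam_struct=10.0):
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits)  # fool the discriminator
    )
    struct = F.l1_loss(despeckled, speckled_input)      # discourage oversmoothing
    return adv + lam_struct * struct
```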

Drought Stress Classification using 3D Plant Models

Oct 18, 2017
Siddharth Srivastava, Swati Bhugra, Brejesh Lall, Santanu Chaudhury

Quantification of physiological changes in plants can capture different drought mechanisms and assist in the selection of tolerant varieties in a high-throughput manner. In this context, an accurate 3D model of the plant canopy provides a reliable representation for drought stress characterization, in contrast to using 2D images. In this paper, we propose a novel end-to-end pipeline including 3D reconstruction, segmentation and feature extraction, leveraging deep neural networks at various stages, for drought stress study. To overcome the high degree of self-similarity and self-occlusion in the plant canopy, prior knowledge of leaf shape, based on features from a deep Siamese network, is used to construct an accurate 3D model using structure from motion on wheat plants. Drought stress is then characterized with deep network based feature aggregation. We compare the proposed methodology against several descriptors and show that the network outperforms conventional methods.

* Appears in Workshop on Computer Vision Problems in Plant Phenotyping (CVPPP), International Conference on Computer Vision (ICCV) 2017 
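
A minimal sketch of a Siamese embedding that could score leaf-patch similarity as a shape prior during reconstruction. The backbone, embedding size and contrastive margin are placeholders, not the network used in the paper.

```python
# Sketch: Siamese leaf-patch embedding with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeafSiamese(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim),
        )

    def forward(self, a, b):
        return F.pairwise_distance(self.net(a), self.net(b))

def contrastive_loss(dist, same_leaf, margin=1.0):
    """same_leaf: 1 for matching patches, 0 otherwise."""
    return (same_leaf * dist.pow(2) +
            (1 - same_leaf) * F.relu(margin - dist).pow(2)).mean()
```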