Sangram Ganguly

DeepSat V2: Feature Augmented Convolutional Neural Nets for Satellite Image Classification

Nov 15, 2019
Qun Liu, Saikat Basu, Sangram Ganguly, Supratik Mukhopadhyay, Robert DiBiano, Manohar Karki, Ramakrishna Nemani

Satellite image classification is a challenging problem that lies at the crossroads of remote sensing, computer vision, and machine learning. Due to the high variability inherent in satellite data, most current object classification approaches are not suitable for handling satellite datasets. Progress in satellite image analytics has also been inhibited by the lack of a single labeled high-resolution dataset with multiple class labels. In a preliminary version of this work, we introduced two new high-resolution satellite imagery datasets (SAT-4 and SAT-6) and proposed the DeepSat framework for classification, based on "handcrafted" features and a deep belief network (DBN). The present paper is an extended version: we present an end-to-end framework leveraging an improved architecture that augments a convolutional neural network (CNN) with handcrafted features (instead of using a DBN-based architecture) for classification. Our framework, having access to fused spatial information obtained from handcrafted features as well as CNN feature maps, achieves accuracies of 99.90% and 99.84% on SAT-4 and SAT-6, respectively, surpassing all other state-of-the-art results. A statistical analysis based on the Distribution Separability Criterion substantiates the robustness of our approach in learning better representations for satellite imagery.
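
A minimal sketch of the fusion idea the abstract describes, in PyTorch: flattened CNN feature maps are concatenated with a vector of handcrafted features before the classification head. This is not the authors' implementation; the layer widths and the 22-dimensional handcrafted vector are placeholder assumptions (SAT-4/SAT-6 patches are 28x28 with four bands: R, G, B, NIR).

```python
import torch
import torch.nn as nn

class FeatureAugmentedCNN(nn.Module):
    def __init__(self, in_channels=4, n_handcrafted=22, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                       # -> (B, 64, 1, 1)
        )
        # Fused head: CNN features concatenated with handcrafted features.
        self.head = nn.Sequential(
            nn.Linear(64 + n_handcrafted, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, handcrafted):
        cnn_feats = self.conv(image).flatten(1)            # (B, 64)
        fused = torch.cat([cnn_feats, handcrafted], dim=1)
        return self.head(fused)

model = FeatureAugmentedCNN()
logits = model(torch.randn(8, 4, 28, 28), torch.randn(8, 22))  # (8, 6)
```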

* This is an Accepted Manuscript of an article published by Taylor & Francis Group in Remote Sensing Letters. arXiv admin note: text overlap with arXiv:1509.03602 

Progressively Growing Generative Adversarial Networks for High Resolution Semantic Segmentation of Satellite Images

Feb 12, 2019
Edward Collier, Kate Duffy, Sangram Ganguly, Geri Madanguit, Subodh Kalia, Gayaka Shreekant, Ramakrishna Nemani, Andrew Michaelis, Shuang Li, Auroop Ganguly, Supratik Mukhopadhyay

Machine learning has proven to be useful in classification and segmentation of images. In this paper, we evaluate a training methodology for pixel-wise segmentation on high resolution satellite images using progressive growing of generative adversarial networks. We apply our model to segmenting building rooftops and compare these results to conventional methods for rooftop segmentation. We present our findings using the SpaceNet version 2 dataset. Progressive GAN training achieved a test accuracy of 93% compared to 89% for traditional GAN training.
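
The core trick of progressive growing, shown in isolation below: while a newly added double-resolution block trains, its output is blended with an upsampled copy of the previous block's output so learning stays stable. A minimal sketch under that reading, not the paper's code; shapes and the fade-in schedule are illustrative.

```python
import torch
import torch.nn.functional as F

def fade_in(prev_out, new_out, alpha):
    """alpha ramps 0 -> 1: at 0 the network behaves as before growing,
    at 1 the new high-resolution block fully takes over."""
    up = F.interpolate(prev_out, scale_factor=2, mode="nearest")
    return (1.0 - alpha) * up + alpha * new_out

prev = torch.randn(1, 1, 64, 64)    # segmentation logits at the old resolution
new = torch.randn(1, 1, 128, 128)   # logits from the newly added block
for step in range(5):
    alpha = step / 4                # linear fade-in schedule
    blended = fade_in(prev, new, alpha)
```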

* Accepted to and presented at DMESS 2018 as part of IEEE ICDM 2018 

Quantifying Uncertainty in Discrete-Continuous and Skewed Data with Bayesian Deep Learning

May 24, 2018
Thomas Vandal, Evan Kodra, Jennifer Dy, Sangram Ganguly, Ramakrishna Nemani, Auroop R. Ganguly

Deep Learning (DL) methods have been transforming computer vision, with innovative adaptations to other domains including climate change. For DL to pervade Science and Engineering (S&E) applications where risk management is a core component, well-characterized uncertainty estimates must accompany predictions. However, S&E observations and model simulations often follow heavily skewed distributions and are not well modeled by DL approaches, which usually optimize a Gaussian, or Euclidean, likelihood loss. Recent developments in Bayesian Deep Learning (BDL), which attempts to capture both aleatoric uncertainty, arising from noisy observations, and epistemic uncertainty, arising from unknown model parameters, provide a foundation. Here we present a discrete-continuous BDL model with Gaussian and lognormal likelihoods for uncertainty quantification (UQ). We demonstrate the approach by developing UQ estimates for DeepSD, a super-resolution-based DL model for Statistical Downscaling (SD) in climate, applied to precipitation, which follows an extremely skewed distribution. We find that the discrete-continuous models outperform a basic Gaussian distribution in terms of predictive accuracy and uncertainty calibration. Furthermore, we find that the lognormal distribution, which can handle skewed data, produces quality uncertainty estimates at the extremes. Such results may be important across S&E, as well as in other domains such as finance and economics, where extremes are often of significant interest. Furthermore, to our knowledge, this is the first UQ model in SD in which both aleatoric and epistemic uncertainties are characterized.
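
The discrete-continuous likelihood can be sketched as a zero-inflated lognormal: a Bernoulli component for whether any precipitation occurs, and a lognormal density over positive amounts. The function below is an illustrative NumPy version, not the paper's implementation; in practice a network would predict (p, mu, sigma) per pixel and this loss would be minimized.

```python
import numpy as np

def dc_lognormal_nll(y, p, mu, sigma, eps=1e-8):
    """Negative log-likelihood of observations y >= 0 under a
    discrete-continuous (zero-inflated) lognormal model."""
    y, p, mu, sigma = map(np.asarray, (y, p, mu, sigma))
    safe_y = np.where(y > 0, y, 1.0)          # avoid log(0) on the dry branch
    log_pdf = (-np.log(safe_y * sigma * np.sqrt(2 * np.pi))
               - (np.log(safe_y) - mu) ** 2 / (2 * sigma ** 2))
    nll = np.where(y > 0,
                   -(np.log(p + eps) + log_pdf),   # rain: p * LogNormal(y)
                   -np.log(1.0 - p + eps))         # no rain: mass at zero
    return nll.mean()

loss = dc_lognormal_nll(y=[0.0, 3.2], p=[0.1, 0.8], mu=[0.0, 1.0], sigma=[1.0, 0.5])
```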

* The 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, August 19-23, 2018, London, United Kingdom 
* 10 pages 

DeepSD: Generating High Resolution Climate Change Projections through Single Image Super-Resolution

Mar 09, 2017
Thomas Vandal, Evan Kodra, Sangram Ganguly, Andrew Michaelis, Ramakrishna Nemani, Auroop R Ganguly

The impacts of climate change are felt by most critical systems, such as infrastructure, ecological systems, and power plants. However, contemporary Earth System Models (ESMs) are run at spatial resolutions too coarse to assess such localized effects. Local-scale projections can be obtained using statistical downscaling, a technique which uses historical climate observations to learn a low-resolution to high-resolution mapping. Depending on statistical modeling choices, downscaled projections have been shown to vary significantly in terms of accuracy and reliability. The spatio-temporal nature of the climate system motivates the adaptation of super-resolution image processing techniques to statistical downscaling. In our work, we present DeepSD, a generalized stacked super-resolution convolutional neural network (SRCNN) framework for statistical downscaling of climate variables. DeepSD augments SRCNN with multi-scale input channels to maximize predictability in statistical downscaling. We provide a comparison with Bias Correction Spatial Disaggregation as well as three Automated-Statistical Downscaling approaches in downscaling daily precipitation from 1 degree (~100 km) to 1/8 degree (~12.5 km) over the Continental United States. Furthermore, a framework using the NASA Earth Exchange (NEX) platform is discussed for downscaling more than 20 ESMs with multiple emission scenarios.
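
One stacked stage might look like the sketch below: the coarse precipitation field is interpolated to the finer grid, concatenated with a static high-resolution auxiliary input (e.g. elevation), and passed through the classic three-layer SRCNN. The 9-1-5 kernel pattern follows the original SRCNN; the channel counts and the elevation channel are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNStage(nn.Module):
    def __init__(self, in_channels=2):   # precipitation + elevation
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, coarse_precip, hires_elevation):
        up = F.interpolate(coarse_precip, size=hires_elevation.shape[-2:],
                           mode="bilinear", align_corners=False)
        return self.net(torch.cat([up, hires_elevation], dim=1))

# Stacking stages that each double resolution, e.g. 1 deg -> 1/2 -> 1/4 -> 1/8.
stage = SRCNNStage()
out = stage(torch.randn(1, 1, 32, 32), torch.randn(1, 1, 64, 64))
```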

* 9 pages, 5 figures, 2 tables 

A Theoretical Analysis of Deep Neural Networks for Texture Classification

Jun 21, 2016
Saikat Basu, Manohar Karki, Robert DiBiano, Supratik Mukhopadhyay, Sangram Ganguly, Ramakrishna Nemani, Shreekant Gayaka

We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis (VC) dimension to show that hand-crafted feature extraction creates low-dimensional representations that help reduce the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and DropConnect networks, and the relation between the excess error rates of Dropout and DropConnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional than handwritten digit or other object recognition datasets, and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity.
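
The vanishing-contrast claim is easy to check numerically: for random points in increasingly high-dimensional spaces, the relative contrast (d_max - d_min) / d_min between a query's farthest and nearest neighbors shrinks toward zero. A quick demonstration, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 10, 100, 1000, 10000):
    points = rng.random((500, n))               # 500 samples in [0, 1]^n
    query = rng.random(n)
    d = np.linalg.norm(points - query, axis=1)  # Euclidean distances
    print(f"n={n:>6}  relative contrast = {(d.max() - d.min()) / d.min():.3f}")
```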

* Accepted in International Joint Conference on Neural Networks, IJCNN 2016 

DeepSat - A Learning framework for Satellite Imagery

Sep 11, 2015
Saikat Basu, Sangram Ganguly, Supratik Mukhopadhyay, Robert DiBiano, Manohar Karki, Ramakrishna Nemani

Satellite image classification is a challenging problem that lies at the crossroads of remote sensing, computer vision, and machine learning. Due to the high variability inherent in satellite data, most current object classification approaches are not suitable for handling satellite datasets. Progress in satellite image analytics has also been inhibited by the lack of a single labeled high-resolution dataset with multiple class labels. The contributions of this paper are twofold: (1) we present two new satellite datasets, SAT-4 and SAT-6, and (2) we propose a classification framework that extracts features from an input image, normalizes them, and feeds the normalized feature vectors to a Deep Belief Network for classification. On the SAT-4 dataset, our best network produces a classification accuracy of 97.95% and outperforms three state-of-the-art object recognition algorithms, namely Deep Belief Networks, Convolutional Neural Networks, and Stacked Denoising Autoencoders, by ~11%. On SAT-6, it produces a classification accuracy of 93.9% and outperforms the other algorithms by ~15%. Comparative studies with a Random Forest classifier show the advantage of an unsupervised learning approach over traditional supervised learning techniques. A statistical analysis based on the Distribution Separability Criterion and Intrinsic Dimensionality Estimation substantiates the effectiveness of our approach in learning better representations for satellite imagery.
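
The pipeline the abstract outlines can be sketched roughly as follows: per-band statistics as handcrafted features, normalization, then an RBM-based network (a single scikit-learn BernoulliRBM feeding logistic regression stands in for the paper's Deep Belief Network). The feature set below is a small illustrative subset, not the paper's full set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def handcrafted_features(patch):
    """patch: (28, 28, 4) array of R, G, B, NIR bands."""
    feats = []
    for b in range(patch.shape[-1]):
        band = patch[..., b].astype(float)
        feats += [band.mean(), band.std(), np.median(band)]
    return np.array(feats)

rng = np.random.default_rng(0)
patches = rng.random((100, 28, 28, 4))   # stand-in for SAT-4/SAT-6 patches
labels = rng.integers(0, 4, 100)

X = np.stack([handcrafted_features(p) for p in patches])
model = Pipeline([
    ("norm", MinMaxScaler()),            # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, labels)
```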

* Accepted at ACM SIGSPATIAL 2015 

Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets

Sep 11, 2015
Saikat Basu, Manohar Karki, Sangram Ganguly, Robert DiBiano, Supratik Mukhopadhyay, Ramakrishna Nemani

Learning sparse feature representations is a useful tool for solving unsupervised learning problems. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST. We then propose a novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST and n-MNIST datasets, our framework shows promising results and significantly outperforms traditional Deep Belief Networks.
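
The quadtree half of the framework, in toy form: recursively split an image into quadrants and keep only the mean of regions that are nearly uniform, yielding a sparse representation. The paper's probabilistic splitting criterion is more involved; the simple variance threshold below stands in for it, so treat this purely as an illustration.

```python
import numpy as np

def quadtree_features(img, threshold=0.01, min_size=4):
    """Return a list of (row, col, size, mean) leaf cells."""
    leaves = []
    def split(r, c, s):
        cell = img[r:r + s, c:c + s]
        if s <= min_size or cell.var() < threshold:
            leaves.append((r, c, s, float(cell.mean())))
        else:
            h = s // 2
            for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
                split(r + dr, c + dc, h)
    split(0, 0, img.shape[0])
    return leaves

img = np.zeros((32, 32)); img[8:20, 8:20] = 1.0      # toy "digit"
print(len(quadtree_features(img)), "leaf cells vs", 32 * 32, "pixels")
```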

* Published in the European Symposium on Artificial Neural Networks, ESANN 2015 