In this paper, we propose a method that uses a three-dimensional convolutional neural network (3-D-CNN) to fuse multispectral (MS) and hyperspectral (HS) images into a high-resolution hyperspectral image. Dimensionality reduction of the hyperspectral image is performed prior to fusion, which significantly reduces the computational time and makes the method more robust to noise. Experiments are performed on a data set simulated from a real hyperspectral image. The results show that the proposed approach is very promising compared to conventional methods, especially when the hyperspectral image is corrupted by additive noise.
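The dimensionality-reduction step described above can be sketched as follows. This is a minimal numpy illustration of projecting the hyperspectral bands onto a few principal components before fusion, and of mapping a fused result back to full spectral resolution; the function names, the component count `k`, and the use of plain PCA are assumptions made for illustration, not the paper's exact pipeline.

```python
import numpy as np

def reduce_hs_bands(hs_cube, k):
    """Project a hyperspectral cube (H, W, B) onto its top-k principal
    components, returning the reduced cube, the loading matrix, and the
    band mean (illustrative PCA stand-in for the paper's reduction step)."""
    H, W, B = hs_cube.shape
    X = hs_cube.reshape(-1, B)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigendecompose the small B x B band covariance
    C = Xc.T @ Xc / Xc.shape[0]
    evals, evecs = np.linalg.eigh(C)
    V = evecs[:, ::-1][:, :k]               # top-k eigenvectors
    reduced = (Xc @ V).reshape(H, W, k)
    return reduced, V, mean

def restore_hs_bands(reduced, V, mean):
    """Map a cube in the reduced space back to full spectral bands."""
    H, W, k = reduced.shape
    return (reduced.reshape(-1, k) @ V.T + mean).reshape(H, W, V.shape[0])
```

The fusion network would then operate on the `k`-band reduced cube, which is both cheaper to convolve over and less noisy than the full cube.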
Remote sensing hyperspectral sensors collect large volumes of high-dimensional spectral and spatial data. However, due to spectral and spatial redundancy, the true hyperspectral signal lies on a subspace of much lower dimension than the original data. Identifying this signal subspace is a very important first step for most hyperspectral algorithms. In this paper, we investigate the problem of identifying the hyperspectral signal subspace by minimizing the mean squared error (MSE) between the true signal and an estimate of the signal. Since the MSE cannot be computed in practice, because it depends on the true signal, we propose a method based on Stein's unbiased risk estimator (SURE), which provides an unbiased estimate of the MSE. The resulting method is simple and fully automatic, and we evaluate it using both simulated and real hyperspectral data sets. Experimental results show that the proposed method compares well to recent state-of-the-art subspace identification methods.
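The idea behind the SURE-based selection can be sketched as follows, under simplifying assumptions: the noise variance is known, and the projection basis is treated as fixed when writing down the unbiased risk estimate (only an approximation when the basis is itself estimated from the data, and not the paper's exact estimator). For a rank-k projection, the classical SURE expression adds a complexity term 2*k*sigma^2 per pixel to the residual.

```python
import numpy as np

def sure_subspace_dim(Y, sigma2):
    """Pick the signal-subspace dimension k minimizing a SURE-type
    unbiased estimate of the MSE.  Y is (p, N): N pixel spectra of
    dimension p; sigma2 is the (assumed known) noise variance."""
    p, N = Y.shape
    # Eigendecompose the sample second-moment matrix of the spectra
    evals, evecs = np.linalg.eigh(Y @ Y.T / N)
    evecs = evecs[:, np.argsort(evals)[::-1]]
    sure = np.empty(p)
    for k in range(1, p + 1):
        Pk = evecs[:, :k] @ evecs[:, :k].T
        rss = np.sum((Y - Pk @ Y) ** 2)
        # SURE for a fixed rank-k projection: RSS + sigma2 * N * (2k - p)
        sure[k - 1] = rss + sigma2 * N * (2 * k - p)
    return int(np.argmin(sure)) + 1
```

Minimizing this expression amounts to keeping exactly the eigenvalues that exceed twice the noise variance, which is why the selection is fully automatic once sigma2 is supplied or estimated.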
Big data applications, such as medical imaging and genetics, typically generate datasets that consist of few observations n on many more variables p, a scenario we denote p>>n. Traditional data processing methods are often insufficient for extracting information from such data, which calls for new algorithms that can deal with the size, complexity, and special structure of these datasets. In this paper, we consider the problem of classifying p>>n data and propose a classification method based on linear discriminant analysis (LDA). Traditional LDA depends on a covariance estimate of the data, but when p>>n the sample covariance estimate is singular. The proposed method instead estimates the covariance using a sparse version of noisy principal component analysis (nPCA). The use of sparsity in this setting aims at automatically selecting the variables that are relevant for classification. In experiments, the new method is compared to state-of-the-art methods for big data problems using both simulated datasets and imaging genetics datasets.
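The covariance construction can be sketched as follows. This is an illustrative numpy version under stated assumptions: the nPCA model C ≈ W W^T + sigma^2 I is fitted from the n x n Gram matrix (cheap when p>>n), small loadings are simply thresholded as a crude stand-in for the paper's sparse nPCA estimator, and the resulting model covariance is inverted with the Woodbury identity so that no p x p matrix is ever formed. The function names and the thresholding rule are assumptions, not the authors' method.

```python
import numpy as np

def npca_lda_fit(X, y, r, tau=0.0):
    """LDA for p >> n with a noisy-PCA covariance model of rank r.
    Loadings with absolute value below tau are zeroed (illustrative
    sparsity stand-in).  Returns class labels, class means, and a
    function applying the inverse model covariance to column vectors."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    Xc = np.vstack([X[y == c] - means[c] for c in classes])
    n, p = Xc.shape
    # Eigendecompose the n x n Gram matrix instead of the p x p covariance
    evals, U = np.linalg.eigh(Xc @ Xc.T / n)
    order = np.argsort(evals)[::-1]
    lam, U = evals[order], U[:, order]
    sigma2 = max(lam[r:].mean(), 1e-12)       # noise floor from trailing eigenvalues
    V = Xc.T @ U[:, :r] / np.sqrt(n * np.maximum(lam[:r], 1e-12))  # p x r loadings
    V[np.abs(V) < tau] = 0.0                  # sparsify the loadings
    W = V * np.sqrt(np.maximum(lam[:r] - sigma2, 0.0))
    # Woodbury: (sigma2*I + W W^T)^{-1} = I/sigma2 - W M W^T / sigma2^2
    M = np.linalg.inv(np.eye(W.shape[1]) + W.T @ W / sigma2)
    def cinv_dot(v):
        return v / sigma2 - W @ (M @ (W.T @ v)) / sigma2**2
    return classes, means, cinv_dot

def npca_lda_predict(model, Xnew):
    """Assign each row of Xnew to the class with smallest Mahalanobis
    distance under the fitted nPCA covariance (equal priors assumed)."""
    classes, means, cinv_dot = model
    scores = []
    for c in classes:
        d = Xnew - means[c]
        scores.append(np.einsum('ij,ij->i', d, cinv_dot(d.T).T))
    return classes[np.argmin(np.vstack(scores), axis=0)]
```

Working with the n x n Gram matrix and the Woodbury identity keeps both fitting and prediction linear in p, which is what makes this kind of estimator feasible in the p>>n regime.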
Significant attention has been given to minimizing a penalized least squares criterion for estimating sparse solutions to large linear systems of equations. The penalty is responsible for inducing sparsity, and the natural choice is the so-called $l_0$ norm. In this paper, we develop a Momentumized Iterative Shrinkage Thresholding (MIST) algorithm for minimizing the resulting non-convex criterion and prove its convergence to a local minimizer. Simulations on large data sets show that the proposed method outperforms competing methods.
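The structure of such an algorithm can be sketched as follows: a gradient step on the least squares term, the hard-thresholding proximal operator of the $l_0$ penalty, and a FISTA-style momentum (extrapolation) step. This is an illustrative sketch of the general idea, assuming a fixed step size 1/L with L the gradient's Lipschitz constant; it is not the authors' exact MIST update or step-size rule.

```python
import numpy as np

def mist_sketch(A, b, lam, n_iter=1000):
    """Momentum-accelerated iterative hard thresholding for
        0.5 * ||A x - b||^2 + lam * ||x||_0.
    Illustrative sketch: gradient step + hard threshold + momentum."""
    n, p = A.shape
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    thresh = np.sqrt(2.0 * lam / L)        # hard-threshold level of the l0 prox
    x = np.zeros(p)
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = z - grad / L               # gradient step at the extrapolated point
        x_new[np.abs(x_new) < thresh] = 0.0  # prox of (lam/L) * ||.||_0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```

The hard threshold $\sqrt{2\lambda/L}$ is exactly the proximal operator of the scaled $l_0$ penalty: entries whose magnitude falls below it are cheaper to set to zero than to keep.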