Alina Zare

Shared Manifold Learning Using a Triplet Network for Multiple Sensor Translation and Fusion with Missing Data

Oct 25, 2022
Aditya Dutt, Alina Zare, Paul Gader

Heterogeneous data fusion can enhance the robustness and accuracy of an algorithm on a given task. However, due to the differences among modalities, aligning the sensors and embedding their information into discriminative and compact representations is challenging. In this paper, we propose a Contrastive learning based MultiModal Alignment Network (CoMMANet) to align data from different sensors into a shared and discriminative manifold where class information is preserved. The proposed architecture uses a multimodal triplet autoencoder to cluster the latent space so that samples of the same class from each heterogeneous modality are mapped close to each other. Since all the modalities exist in a shared manifold, a unified classification framework is proposed. The resulting latent space representations are fused to perform more robust and accurate classification. In a missing-sensor scenario, the latent space of one sensor is easily and efficiently predicted using another sensor's latent space, thereby allowing sensor translation. We conducted extensive experiments on a manually labeled multimodal dataset containing hyperspectral data from AVIRIS-NG and NEON, and LiDAR (light detection and ranging) data from NEON. Lastly, the model is validated on two benchmark datasets: the Berlin dataset (hyperspectral and synthetic aperture radar) and the MUUFL Gulfport dataset (hyperspectral and LiDAR). Comparisons with other methods demonstrate the superiority of this approach: we achieved a mean overall accuracy of 94.3% on the MUUFL dataset and the best overall accuracy of 71.26% on the Berlin dataset, exceeding other state-of-the-art approaches.
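
To make the alignment mechanism concrete, the sketch below shows a cross-modal triplet loss in which an anchor from one sensor is pulled toward a same-class sample from another sensor and pushed away from a different-class sample. This is a minimal illustration only: the encoder shapes, input dimensions, and margin are assumptions, and CoMMANet's full design (including its autoencoder reconstruction branches) is not reproduced here.

```python
# Minimal sketch of cross-modal triplet alignment in the spirit of CoMMANet.
# Encoder shapes, input dimensions, and the margin are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one sensor's features into a shared latent space."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

enc_hsi = ModalityEncoder(in_dim=224)    # e.g., hyperspectral bands
enc_lidar = ModalityEncoder(in_dim=32)   # e.g., LiDAR-derived features
triplet = nn.TripletMarginLoss(margin=1.0)

def alignment_loss(hsi_anchor, lidar_pos, lidar_neg):
    """Anchor from one modality; positive (same class) and negative
    (different class) from the other, so same-class samples cluster
    together across sensors in the shared manifold."""
    a = enc_hsi(hsi_anchor)
    p = enc_lidar(lidar_pos)
    n = enc_lidar(lidar_neg)
    return triplet(a, p, n)
```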

* 19 pages, 16 figures; Accepted to IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 

Histogram Layers for Synthetic Aperture Sonar Imagery

Sep 08, 2022
Joshua Peeples, Alina Zare, Jeffrey Dale, James Keller

Synthetic aperture sonar (SAS) imagery is crucial for several applications, including target recognition and environmental segmentation. Deep learning models have led to much success in SAS analysis; however, the features extracted by these approaches may not be suitable for capturing certain textural information. To address this problem, we present a novel application of histogram layers to SAS imagery. Adding histogram layer(s) to deep learning models improved performance on both synthetic and real-world datasets by incorporating statistical texture information.
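
The underlying mechanism, soft-binning feature values with learnable bin centers and widths so that texture statistics become differentiable, can be sketched as follows. This is a simplified version under assumed details (a Gaussian binning kernel and per-channel normalized counts), not the authors' released implementation.

```python
# Simplified differentiable histogram layer: learnable bin centers/widths,
# soft counts via a Gaussian kernel. A sketch under assumed details only.
import torch
import torch.nn as nn

class HistogramLayer(nn.Module):
    def __init__(self, num_bins=8):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(0, 1, num_bins))
        self.widths = nn.Parameter(torch.full((num_bins,), 0.1))

    def forward(self, x):
        # x: (batch, channels, H, W) feature maps
        b, c, h, w = x.shape
        x = x.reshape(b, c, h * w, 1)
        # Soft assignment of each value to each bin (Gaussian kernel).
        diff = (x - self.centers) / self.widths
        soft_counts = torch.exp(-diff ** 2)
        # Normalized per-channel histogram: average over spatial positions.
        return soft_counts.mean(dim=2)  # (batch, channels, num_bins)
```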

* 7 pages, 9 Figures, Accepted to IEEE International Conference on Machine Learning and Applications (ICMLA) 2022 

PRMI: A Dataset of Minirhizotron Images for Diverse Plant Root Study

Jan 20, 2022
Weihuang Xu, Guohao Yu, Yiming Cui, Romain Gloaguen, Alina Zare, Jason Bonnette, Joel Reyes-Cabrera, Ashish Rajurkar, Diane Rowland, Roser Matamala, Julie D. Jastrow, Thomas E. Juenger, Felix B. Fritschi

Understanding a plant's root system architecture (RSA) is crucial for a variety of plant science problem domains, including sustainability and climate adaptation. Minirhizotron (MR) technology is a widely used approach for phenotyping RSA non-destructively by capturing root imagery over time. Precisely segmenting roots from the soil in MR imagery is a critical step in studying RSA features. In this paper, we introduce a large-scale dataset of plant root images captured by MR technology. In total, the dataset contains over 72K RGB root images across six different species: cotton, papaya, peanut, sesame, sunflower, and switchgrass. The images span a variety of conditions, including varied root age, root structures, soil types, and depths under the soil surface. All of the images have been annotated with weak image-level labels indicating whether each image contains roots or not. These image-level labels can be used to support weakly supervised learning in plant root segmentation tasks. In addition, 63K images have been manually annotated to generate pixel-level binary masks indicating whether each pixel corresponds to a root or not. These pixel-level binary masks can be used as ground truth for supervised learning in semantic segmentation tasks. By introducing this dataset, we aim to facilitate the automatic segmentation of roots and the study of RSA with deep learning and other image analysis algorithms.
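
As one hypothetical way to consume the image-level labels, the sketch below wraps them in a PyTorch dataset for weakly supervised training. The CSV schema and column names are invented for illustration and do not reflect the dataset's actual file layout.

```python
# Hypothetical loader for weakly supervised training with image-level
# root/no-root labels. The CSV schema below is an assumption for
# illustration, not the dataset's real file structure.
import csv
from PIL import Image
import torch
from torch.utils.data import Dataset

class WeakRootDataset(Dataset):
    def __init__(self, labels_csv, transform=None):
        with open(labels_csv) as f:
            # Assumed columns: image_path, has_root (0 or 1)
            self.items = [(r["image_path"], int(r["has_root"]))
                          for r in csv.DictReader(f)]
        self.transform = transform

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx]
        img = Image.open(path).convert("RGB")
        if self.transform:
            img = self.transform(img)
        return img, torch.tensor(label, dtype=torch.float32)
```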

* The 36th AAAI Conference on the AI for Agriculture and Food Systems (AIAFS) Workshop 

Image-to-Height Domain Translation for Synthetic Aperture Sonar

Dec 12, 2021
Dylan Stewart, Shawn Johnson, Alina Zare

Observations of seabed texture with synthetic aperture sonar are dependent upon several factors. In this work, we focus on collection geometry with respect to isotropic and anisotropic textures. The low grazing angle of the collection geometry, combined with the orientation of the sonar path relative to anisotropic texture, poses a significant challenge for image alignment and other multi-view scene understanding frameworks. We previously proposed using features captured from estimated seabed relief to improve scene understanding. While several methods have been developed to estimate seabed relief from intensity, no large-scale study exists in the literature, and no dataset of coregistered seabed relief maps and sonar imagery exists from which to learn this domain translation. We address these problems by producing a large simulated dataset containing coregistered pairs of seabed relief and intensity maps from two unique sonar data simulation techniques. We apply three types of models, with varying complexity, to translate intensity imagery to seabed relief: a Gaussian Markov Random Field (GMRF) approach, a conditional Generative Adversarial Network (cGAN), and UNet architectures. Methods are compared on the coregistered simulated datasets using L1 error. Additionally, predictions on simulated and real SAS imagery are shown. Finally, models are compared on two datasets of hand-aligned SAS imagery and evaluated in terms of L1 error across multiple aspects in comparison to using intensity. Our comprehensive experiments show that the proposed UNet architectures outperform the GMRF and pix2pix cGAN models on seabed relief estimation for simulated and real SAS imagery.
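
The supervised translation objective itself is simple to sketch: a network maps an intensity image to a relief map and is trained against the coregistered ground truth with L1 error, the same measure used for evaluation. The model below is a two-layer stand-in, not the paper's UNet or cGAN.

```python
# Minimal sketch of the image-to-height translation objective: predict
# seabed relief from intensity, trained with pixelwise L1 error against
# coregistered relief maps. The model is a placeholder, not the paper's UNet.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in for a UNet
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1 = nn.L1Loss()

def train_step(intensity, relief):
    """intensity, relief: (batch, 1, H, W) coregistered pairs."""
    opt.zero_grad()
    pred = model(intensity)
    loss = l1(pred, relief)   # same L1 criterion used for evaluation
    loss.backward()
    opt.step()
    return loss.item()
```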


Cross-Layered Distributed Data-driven Framework For Enhanced Smart Grid Cyber-Physical Security

Nov 10, 2021
Allen Starke, Keerthiraj Nagaraj, Cody Ruben, Nader Aljohani, Sheng Zou, Arturo Bretas, Janise McNair, Alina Zare

Smart Grid (SG) research and development has drawn much attention from academia, industry, and government due to the great impact it will have on society, economics, and the environment. Securing the SG is a significant challenge due to the increased dependency on communication networks to assist in physical process control, exposing them to various cyber-threats. In addition to attacks that change measurement values using False Data Injection (FDI) techniques, attacks on the communication network may disrupt the power system's real-time operation by intercepting messages or by flooding the communication channels with unnecessary data. Addressing these attacks requires a cross-layer approach. In this paper, a cross-layer strategy is presented, called Cross-Layer Ensemble CorrDet with Adaptive Statistics (CECD-AS), which integrates the detection of faulty SG measurement data as well as inconsistent network inter-arrival times and transmission delays for more reliable and accurate anomaly detection and attack interpretation. Numerical results show that CECD-AS can detect multiple False Data Injection, Denial of Service (DoS), and Man-in-the-Middle (MITM) attacks with a high F1-score compared to current approaches that only use SG measurement data for detection, such as traditional physics-based State Estimation, the Ensemble CorrDet with Adaptive Statistics strategy, and other machine learning classification-based detection schemes.
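
Conceptually, the cross-layer idea concatenates physical measurement features with network timing features and scores the combined vector against adaptively updated statistics. The sketch below is a loose illustration using an exponentially weighted running mean and variance with a normalized-distance score; the exact CECD-AS statistics, update rules, and thresholds differ.

```python
# Conceptual sketch of cross-layer anomaly detection: physical measurements
# and network timing features are concatenated and scored against adaptively
# updated statistics. Update rule and threshold are illustrative assumptions.
import numpy as np

class AdaptiveAnomalyDetector:
    def __init__(self, dim, alpha=0.01, threshold=3.0):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.alpha = alpha          # forgetting factor for the statistics
        self.threshold = threshold  # flag if the normalized distance exceeds this

    def score(self, x):
        # Normalized distance of the sample from the running statistics.
        return float(np.sqrt(np.mean((x - self.mean) ** 2 / self.var)))

    def update(self, x):
        # Exponentially weighted running mean/variance ("adaptive statistics").
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2

detector = AdaptiveAnomalyDetector(dim=6)
sample = np.concatenate([
    np.array([1.01, 0.98, 1.02]),   # e.g., SG measurement features
    np.array([0.05, 0.04, 0.06]),   # e.g., inter-arrival times / delays
])
is_attack = detector.score(sample) > detector.threshold
detector.update(sample)  # typically adapt only on samples deemed normal
```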


Robust Semi-Supervised Classification using GANs with Self-Organizing Maps

Oct 19, 2021
Ronald Fick, Paul Gader, Alina Zare

Generative adversarial networks (GANs) have shown tremendous promise in learning to generate data and have proven effective at aiding semi-supervised classification. However, to this point, semi-supervised GAN (SS-GAN) methods make the assumption that the unlabeled data set contains only samples of the joint distribution of the classes of interest, referred to as inliers. Consequently, when presented with a sample from other distributions, referred to as outliers, GANs perform poorly at determining that they are not qualified to make a decision on the sample. The problem of discriminating outliers from inliers while maintaining classification accuracy is referred to here as the DOIC problem. In this work, we describe an architecture that combines self-organizing maps (SOMs) with SS-GANs with the goal of mitigating the DOIC problem, and we present experimental results indicating that the architecture achieves that goal. Multiple experiments were conducted on hyperspectral image data sets. The SS-GANs performed slightly better than supervised GANs on classification problems with and without the SOM. Incorporating the SOMs into the SS-GANs and the supervised GANs substantially mitigated the DOIC problem when compared to SS-GANs and GANs without the SOMs. Furthermore, the SS-GANs performed much better than GANs on the DOIC problem, even without the SOMs.
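
For reference, a generic self-organizing map can flag likely outliers via quantization error: a sample that lies far from every learned prototype is suspect. The NumPy sketch below shows only this SOM side; how the paper couples the SOM with the SS-GAN's feature space is not reproduced here.

```python
# Generic self-organizing map sketch (NumPy). Samples whose best-matching
# unit is far away get a high quantization error, a simple outlier cue.
# This is not the paper's exact SOM/SS-GAN integration.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))  # neighborhood function
            weights += lr * h[:, None] * (x - weights)
            t += 1
    return weights

def quantization_error(weights, x):
    """Distance to the best-matching unit; large values suggest an outlier."""
    return float(np.min(np.linalg.norm(weights - x, axis=1)))
```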

* 9 pages, 13 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

Learnable Adaptive Cosine Estimator (LACE) for Image Classification

Oct 15, 2021
Joshua Peeples, Connor McCurley, Sarah Walker, Dylan Stewart, Alina Zare

In this work, we propose a new loss to improve feature discriminability and classification performance. Motivated by the adaptive cosine/coherence estimator (ACE), our proposed method incorporates angular information that is inherently learned by artificial neural networks. Our learnable ACE (LACE) transforms the data into a new "whitened" space that improves inter-class separability and intra-class compactness. We compare LACE to alternative state-of-the-art softmax-based and feature regularization approaches. Our results show that the proposed method can serve as a viable alternative to cross entropy and angular softmax approaches. Our code is publicly available: https://github.com/GatorSense/LACE.
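
As a rough illustration of the idea (not the released LACE code), the sketch below compares features to learnable class targets by cosine similarity in a learned "whitened" space and applies cross entropy to the scaled similarities; the paper's exact whitening and regularization differ.

```python
# Conceptual sketch of an ACE-inspired angular loss: cosine similarity
# between features and learnable class targets in a learned whitened
# space, with cross entropy on scaled similarities. A simplification,
# not the released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WhitenedCosineHead(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.whiten = nn.Linear(feat_dim, feat_dim, bias=False)  # learned whitening
        self.targets = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats, labels):
        z = F.normalize(self.whiten(feats), dim=1)
        t = F.normalize(self.whiten(self.targets), dim=1)
        logits = self.scale * z @ t.T        # scaled cosine similarities
        return F.cross_entropy(logits, labels)
```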

* Accepted to WACV 2022; 14 pages (including appendix), 3 figures 

Possibilistic Fuzzy Local Information C-Means with Automated Feature Selection for Seafloor Segmentation

Oct 14, 2021
Joshua Peeples, Daniel Suen, Alina Zare, James Keller

The Possibilistic Fuzzy Local Information C-Means (PFLICM) method is presented as a technique to segment side-look synthetic aperture sonar (SAS) imagery into distinct regions of the seafloor. In this work, we investigate and present the results of an automated feature selection approach for SAS image segmentation. The chosen features and the resulting segmentation are assessed using a quantitative clustering validity criterion, and the subset of features that reaches a desired threshold is used for the segmentation process.
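
For orientation, the plain fuzzy c-means (FCM) updates that PFLICM builds on are sketched below. PFLICM additionally incorporates possibilistic typicality values and a local spatial-information term, which are omitted here.

```python
# Sketch of the plain fuzzy c-means (FCM) updates underlying PFLICM.
# PFLICM's possibilistic and local-information terms are not shown.
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-9):
    """X: (n, d) samples; centers: (c, d). Returns (n, c) memberships."""
    # Distances from every sample to every cluster center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))
    return 1.0 / ratio.sum(axis=2)

def fcm_centers(X, U, m=2.0):
    """Cluster centers as weighted means under fuzzified memberships."""
    W = U ** m
    return (W.T @ X) / W.sum(axis=0)[:, None]
```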

* Proc. SPIE 10628, Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXIII (30 April 2018), 14 pages, 7 figures, 5 tables 

Addressing Annotation Imprecision for Tree Crown Delineation Using the RandCrowns Index

May 05, 2021
Dylan Stewart, Alina Zare, Sergio Marconi, Ben Weinstein, Ethan White, Sarah Graves, Stephanie Bohlman, Aditya Singh

Supervised methods for object delineation in remote sensing require labeled ground-truth data. Gathering sufficient high-quality ground-truth data is difficult, especially when the targets are of irregular shape or difficult to distinguish from the background or neighboring objects. Tree crown delineation provides key information from remote sensing images for forestry, ecology, and management. However, tree crowns in remote sensing imagery are often difficult to label and annotate due to irregular shape, overlapping canopies, shadowing, and indistinct edges. There are also multiple approaches to annotation in this field (e.g., rectangular boxes vs. convex polygons) that further contribute to annotation imprecision. However, current evaluation methods do not account for this uncertainty in annotations, and quantitative evaluation metrics can vary across multiple annotators. We address these limitations using an adaptation of the Rand index for weakly-labeled crown delineation that we call RandCrowns. The RandCrowns metric reformulates the Rand index by adjusting the areas over which each term of the index is computed to account for uncertain and imprecise object delineation labels. Quantitative comparisons to the commonly used intersection over union (Jaccard similarity) method show a decrease in the variance generated by differences among multiple annotators. Combined with qualitative examples, our results suggest that the RandCrowns metric is more robust for scoring target delineations in the presence of the uncertainty and imprecision in annotations that are inherent to tree crown delineation. Although the focus of this paper is on evaluation of tree crown delineations, annotation imprecision is a challenge common across remote sensing of the environment (and many computer vision problems in general).
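
For context, the intersection-over-union (Jaccard) baseline that RandCrowns is compared against can be computed as below for rectangular annotations; RandCrowns itself adjusts the regions over which the Rand-index terms are computed, which is not shown.

```python
# The Jaccard (intersection over union) baseline for two rectangular
# crown annotations. RandCrowns' adjusted Rand-index regions are not shown.
def box_iou(a, b):
    """a, b: (xmin, ymin, xmax, ymax) axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two annotators delineating the same crown can disagree substantially:
print(box_iou((10, 10, 50, 50), (15, 12, 55, 48)))  # ~0.71, not 1.0
```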
