Gabriele Meoni

On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing

Aug 01, 2023
Loïc J. Azzalini, Emmanuel Blazquez, Alexander Hadjiivanov, Gabriele Meoni, Dario Izzo

An event-based camera outputs an event whenever a change in scene brightness of a preset magnitude is detected at a particular pixel location in the sensor plane. The resulting sparse and asynchronous output, coupled with the high dynamic range and temporal resolution of this novel camera, motivates the study of event-based cameras for navigation and landing applications. However, the lack of real-world and synthetic datasets to support this line of research has limited its consideration for onboard use. This paper presents a methodology and a software pipeline for generating event-based vision datasets from optimal landing trajectories during the approach of a target body. We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility at different viewpoints along a set of optimal descent trajectories obtained by varying the boundary conditions. The generated image sequences are then converted into event streams by means of an event-based camera emulator. We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories, complete with event streams and motion field ground truth data. We anticipate that novel event-based vision datasets can be generated with this pipeline to support a variety of spacecraft pose reconstruction problems given events as input, and we hope that the proposed methodology will attract the attention of researchers working at the intersection of neuromorphic vision and guidance, navigation and control.
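
As a rough sketch of how such an event-based camera emulator turns an image sequence into an event stream (a minimal per-frame model in Python, not the pipeline's actual implementation; the logarithmic intensity model and contrast threshold are standard assumptions):

    import numpy as np

    def emulate_events(frames, timestamps, threshold=0.2, eps=1e-6):
        """Convert a sequence of grayscale frames into events.

        An event (t, x, y, polarity) fires whenever the log-intensity
        at a pixel has changed by at least `threshold` since the last
        event at that pixel (simplified emulator model).
        """
        ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference log-intensity
        events = []
        for frame, t in zip(frames[1:], timestamps[1:]):
            log_i = np.log(frame.astype(np.float64) + eps)
            diff = log_i - ref
            ys, xs = np.nonzero(np.abs(diff) >= threshold)
            for x, y in zip(xs, ys):
                polarity = 1 if diff[y, x] > 0 else -1
                events.append((t, x, y, polarity))
                # step the reference by one threshold, as a real sensor would
                ref[y, x] += polarity * threshold
        return events

Real emulators additionally interpolate between frames to assign sub-frame timestamps; the sketch above quantises all events of a frame to the same timestamp.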

THRawS: A Novel Dataset for Thermal Hotspots Detection in Raw Sentinel-2 Data

May 12, 2023
Gabriele Meoni, Roberto Del Prete, Federico Serva, Alix De Beussche, Olivier Colin, Nicolas Longépé

Most datasets leveraging spaceborne Earth Observation (EO) data are based on high-level products, which are ortho-rectified, coregistered, calibrated, and further processed to mitigate the impact of noise and distortions. Nevertheless, given the growing interest in applying Artificial Intelligence (AI) onboard satellites for time-critical applications, such as natural disaster response, providing raw satellite images could be useful to foster research on energy-efficient pre-processing algorithms and AI models for onboard-satellite applications. In this framework, we present THRawS, the first dataset composed of Sentinel-2 (S-2) raw data containing warm temperature hotspots (wildfires and volcanic eruptions). To foster the realisation of robust AI architectures, the dataset gathers data from all over the globe. Furthermore, we designed a custom methodology to identify events in raw data starting from the corresponding Level-1C (L1C) products. Given the availability of state-of-the-art algorithms for thermal anomaly detection on L1C tiles, we detect such events on the L1C products and then re-project them onto the corresponding raw images. Additionally, to deal with unprocessed data, we devise a lightweight coarse coregistration and georeferencing strategy. The dataset comprises more than 100 samples containing wildfires, volcanic eruptions, and event-free volcanic areas to enable both warm-event detection and general classification applications. Finally, we compare the performance of the proposed coarse spatial coregistration technique with the SuperGlue deep neural network method to highlight the trade-off between registration time and quality when minimising the spatial displacement error for a specific scene.
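
As a minimal sketch of what a lightweight, coarse band-to-band coregistration can look like (illustrative only: the fixed per-band integer offsets and the function name are assumptions, not the paper's actual values or code):

    import numpy as np

    def coarse_coregister(bands, offsets):
        """Shift each raw detector band by a fixed (row, col) offset.

        bands:   dict mapping band name -> 2D numpy array
        offsets: dict mapping band name -> (dy, dx) integer shift,
                 e.g. precomputed once per detector (hypothetical values)
        """
        registered = {}
        for name, img in bands.items():
            dy, dx = offsets.get(name, (0, 0))
            registered[name] = np.roll(img, shift=(dy, dx), axis=(0, 1))
        return registered

    # illustrative use with made-up offsets
    bands = {"B8A": np.random.rand(64, 64), "B11": np.random.rand(64, 64)}
    offsets = {"B11": (3, -2)}  # hypothetical shift of B11 relative to B8A
    aligned = coarse_coregister(bands, offsets)

Applying one precomputed integer shift per band is what keeps such a strategy cheap enough for onboard use, at the cost of residual sub-pixel misalignment.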

* 13 pages, 7 figures, 3 tables 

Decentralised Semi-supervised Onboard Learning for Scene Classification in Low-Earth Orbit

May 06, 2023
Johan Östman, Pablo Gómez, Vinutha Magal Shreenath, Gabriele Meoni

Onboard machine learning on the latest satellite hardware offers the potential for significant savings in communication and operational costs. We showcase the training of a machine learning model on a satellite constellation for scene classification using semi-supervised learning, while accounting for operational constraints such as temperature and limited power budgets based on benchmarks of the neural network on satellite processors. We evaluate mission scenarios employing both decentralised and federated learning approaches. All scenarios achieve convergence to high accuracy (around 91% on the EuroSAT RGB dataset) within a one-day mission timeframe.
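
As a rough illustration of the aggregation step in the federated variant (a generic FedAvg sketch in Python; the paper's exact protocol and communication scheme may differ):

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """FedAvg: weighted average of per-satellite model parameters.

        client_weights: list (one entry per satellite) of lists of
                        numpy arrays, one array per model layer
        client_sizes:   number of local training samples per satellite
        """
        total = float(sum(client_sizes))
        n_layers = len(client_weights[0])
        averaged = []
        for layer in range(n_layers):
            # weight each satellite's parameters by its share of the data
            acc = sum(w[layer] * (n / total)
                      for w, n in zip(client_weights, client_sizes))
            averaged.append(acc)
        return averaged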

* Accepted at IAA SSEO 2023 

Selected Trends in Artificial Intelligence for Space Applications

Dec 17, 2022
Dario Izzo, Gabriele Meoni, Pablo Gómez, Dominik Dold, Alexander Zoechbauer

The development and adoption of artificial intelligence (AI) technologies in space applications is growing quickly as consensus increases on the potential benefits they introduce. As more and more aerospace engineers become aware of new trends in AI, traditional approaches are revisited to consider the applications of emerging AI technologies. Already at the time of writing, the scope of AI-related activities across academia, the aerospace industry and space agencies is so wide that an in-depth review would not fit in these pages. In this chapter we focus instead on two main emerging trends we believe capture the most relevant and exciting activities in the field: differentiable intelligence and onboard machine learning. Differentiable intelligence, in a nutshell, refers to works making extensive use of automatic differentiation frameworks to learn the parameters of machine learning or related models. Onboard machine learning considers the problem of moving inference, as well as learning, onboard. Within these fields, we discuss a few selected projects originating from the European Space Agency's (ESA) Advanced Concepts Team (ACT), giving priority to advanced topics going beyond the transposition of established AI techniques and practices to the space domain.
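
As a toy illustration of the differentiable intelligence idea, learning model parameters through an automatic differentiation framework (JAX here, purely as an example; the fitted linear model and hyperparameters are illustrative):

    import jax
    import jax.numpy as jnp

    def model(params, x):
        w, b = params
        return w * x + b

    def loss(params, x, y):
        return jnp.mean((model(params, x) - y) ** 2)

    grad_fn = jax.grad(loss)           # autodiff provides the gradient
    params = (jnp.array(0.0), jnp.array(0.0))
    x = jnp.linspace(0.0, 1.0, 20)
    y = 3.0 * x + 1.0                  # synthetic ground truth
    for _ in range(200):
        g = grad_fn(params, x, y)
        params = tuple(p - 0.1 * gp for p, gp in zip(params, g))

The same pattern scales from this toy regression to differentiable simulators and learned dynamics models, since the framework differentiates through whatever computation defines the loss.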

Neuromorphic Computing and Sensing in Space

Dec 17, 2022
Dario Izzo, Alexander Hadjiivanov, Dominik Dold, Gabriele Meoni, Emmanuel Blazquez

The term "neuromorphic" refers to systems that closely resemble the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that draw inspiration from, e.g., the visual or olfactory systems of insects and mammals to acquire information about the environment. This approach is not without ambition, as it promises to enable engineered devices able to reproduce the level of performance observed in biological organisms, the main immediate advantage being the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications. Spacecraft, especially miniaturized ones, have strict energy constraints as they need to operate in an environment that is scarce in resources and extremely hostile. In this work we present an overview of early attempts made to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).
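
As a small illustration of the biological dynamics such systems mimic, a minimal leaky integrate-and-fire neuron, the basic unit of many neuromorphic chips (a textbook sketch; the parameter values are arbitrary):

    def lif_neuron(input_current, dt=1e-3, tau=2e-2, v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron: the membrane potential leaks
        toward zero, integrates the input current, and emits a spike
        whenever it crosses the threshold."""
        v = 0.0
        spikes = []
        for i in input_current:
            v += dt * (-v / tau + i)   # leaky integration
            if v >= v_thresh:
                spikes.append(1)
                v = v_reset            # reset after spiking
            else:
                spikes.append(0)
        return spikes

The event-driven nature of such units (computation only happens when spikes occur) is where the energy savings discussed above come from.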

Globally Optimal Event-Based Divergence Estimation for Ventral Landing

Sep 27, 2022
Sofia McLeod, Gabriele Meoni, Dario Izzo, Anne Mergy, Daqi Liu, Yasir Latif, Ian Reid, Tat-Jun Chin

Event sensing is a major component in bio-inspired flight guidance and control systems. We explore the use of event cameras for predicting time-to-contact (TTC) with the surface during ventral landing. This is achieved by estimating divergence (inverse TTC), which is the rate of radial optic flow, from the event stream generated during landing. Our core contributions are a novel contrast maximisation formulation for event-based divergence estimation, and a branch-and-bound algorithm to exactly maximise contrast and find the optimal divergence value. We use GPU acceleration to speed up the global algorithm. Another contribution is a new dataset containing real event streams from ventral landings, which we employed to test and benchmark our method. Owing to global optimisation, our algorithm is much more capable of recovering the true divergence than other heuristic divergence estimators or event-based optic flow methods. With GPU acceleration, our method also achieves competitive runtimes.
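
A rough sketch of the contrast maximisation idea (a naive grid search over candidate divergences with a first-order radial warp; the paper replaces this search with a branch-and-bound algorithm that certifies the global maximum, so this is not the authors' implementation):

    import numpy as np

    def contrast(events, divergence, t_ref, img_shape, center):
        """Warp events radially by a candidate divergence and score the
        variance (contrast) of the accumulated event image.
        events: array with rows (t, x, y); center: focus of expansion."""
        t, x, y = events[:, 0], events[:, 1], events[:, 2]
        scale = 1.0 - divergence * (t - t_ref)   # first-order radial warp
        xw = center[0] + (x - center[0]) * scale
        yw = center[1] + (y - center[1]) * scale
        h, w = img_shape
        img, _, _ = np.histogram2d(yw, xw, bins=(h, w),
                                   range=[[0, h], [0, w]])
        return img.var()

    def estimate_divergence(events, t_ref, img_shape, center,
                            candidates=np.linspace(-2.0, 2.0, 401)):
        # naive grid search over the divergence values; the true
        # divergence yields the sharpest (highest-contrast) image
        scores = [contrast(events, d, t_ref, img_shape, center)
                  for d in candidates]
        return candidates[int(np.argmax(scores))]

The intuition is that warping with the correct divergence undoes the radial expansion of the scene, so events pile up on sharp edges and the accumulated image has maximal contrast.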

* Accepted in the ECCV 2022 workshop on AI for Space, 18 pages, 6 figures 

MSMatch: Semi-Supervised Multispectral Scene Classification with Few Labels

Mar 18, 2021
Pablo Gómez, Gabriele Meoni

Supervised learning techniques are at the center of many tasks in remote sensing. Unfortunately, these methods, especially recent deep learning methods, often require large amounts of labeled data for training. Even though satellites acquire large amounts of data, labeling the data is often tedious and expensive, and requires expert knowledge. Hence, improved methods that require fewer labeled samples are needed. We present MSMatch, the first semi-supervised learning approach competitive with supervised methods on scene classification on the EuroSAT benchmark dataset. We test both RGB and multispectral images and perform various ablation studies to identify the critical parts of the model. The trained neural network achieves state-of-the-art results on EuroSAT, with an accuracy between 1.98% and 19.76% higher than previous methods, depending on the number of labeled training examples. With just five labeled examples per class we reach 94.53% and 95.86% accuracy on the EuroSAT RGB and multispectral datasets, respectively. With 50 labels per class we reach 97.62% and 98.23% accuracy. Our results show that MSMatch is capable of greatly reducing the requirements for labeled data. It translates well to multispectral data and should enable various applications that are currently infeasible due to a lack of labeled data. We provide the source code of MSMatch online to enable easy reproduction and quick adoption.
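
As a minimal sketch of the pseudo-labelling step used in consistency-based semi-supervised learning, the family MSMatch belongs to (a generic FixMatch-style loss in PyTorch; the x_weak/x_strong inputs are assumed to come from weak and strong augmentations of the same unlabeled batch, and this is not the paper's exact code):

    import torch
    import torch.nn.functional as F

    def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
        """Pseudo-label confidently predicted weakly-augmented images,
        then train the model to predict the same label on their
        strongly-augmented versions."""
        with torch.no_grad():
            probs = F.softmax(model(x_weak), dim=1)
            conf, pseudo = probs.max(dim=1)
            mask = conf >= threshold      # keep only confident pseudo-labels
        logits = model(x_strong)
        loss = F.cross_entropy(logits, pseudo, reduction="none")
        return (loss * mask.float()).mean()

    # toy usage with a linear classifier on random data
    model = torch.nn.Linear(32, 10)
    xw, xs = torch.randn(8, 32), torch.randn(8, 32)
    l = fixmatch_unlabeled_loss(model, xw, xs)

The confidence threshold is what lets the method exploit unlabeled imagery safely: low-confidence predictions are simply masked out of the loss.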

* 6 pages, 4 figures, submitted to IEEE Transactions on Geoscience and Remote Sensing 