Vivien Sainte Fare Garnot

Accuracy and Consistency of Space-based Vegetation Height Maps for Forest Dynamics in Alpine Terrain

Sep 04, 2023
Yuchang Jiang, Marius Rüetschi, Vivien Sainte Fare Garnot, Mauro Marty, Konrad Schindler, Christian Ginzler, Jan D. Wegner

Monitoring and understanding forest dynamics is essential for environmental conservation and management. To this end, the Swiss National Forest Inventory (NFI) provides countrywide vegetation height maps at a spatial resolution of 0.5 m. Its long update interval of 6 years, however, limits the temporal analysis of forest dynamics. This can be improved by using spaceborne remote sensing and deep learning to generate large-scale vegetation height maps in a cost-effective way. In this paper, we present an in-depth analysis of these methods for operational application in Switzerland. We generate annual, countrywide vegetation height maps at a 10-meter ground sampling distance for the years 2017 to 2020 based on Sentinel-2 satellite imagery. In contrast to previous work, we conduct a large-scale, detailed stratified analysis against a precise Airborne Laser Scanning reference dataset. This stratified analysis reveals a close relationship between model accuracy and topography, especially slope and aspect. We assess the potential of deep-learning-derived height maps for change detection and find that these maps can indicate changes as small as 250 m². Larger-scale changes caused by a winter storm are detected with an F1-score of 0.77. Our results demonstrate that vegetation height maps computed from satellite imagery with deep learning are a valuable, complementary, cost-effective source of evidence that increases the temporal resolution of national forest assessments.
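
As a rough illustration of how such annual height maps can feed change detection, the sketch below differences two yearly rasters and keeps connected regions of strong height loss above a minimum area (at 10 m ground sampling distance, 250 m² is about three pixels). The threshold values and the function itself are illustrative assumptions, not the procedure used in the paper.

```python
import numpy as np
from scipy import ndimage

def detect_height_loss(h_prev, h_curr, drop_m=5.0, min_area_m2=250.0, gsd_m=10.0):
    """Flag connected regions where vegetation height dropped sharply.

    h_prev, h_curr: 2D arrays of vegetation height (meters) for two years.
    drop_m: per-pixel height loss (meters) counted as a candidate change.
    min_area_m2: smallest change area to keep; both thresholds are
    illustrative, not the values used in the paper.
    """
    candidate = (h_prev - h_curr) > drop_m          # pixels with strong height loss
    labels, n = ndimage.label(candidate)            # connected components (4-connectivity)
    min_pixels = int(np.ceil(min_area_m2 / gsd_m ** 2))
    sizes = ndimage.sum(candidate, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return keep                                     # boolean change mask
```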

Mixture of Experts with Uncertainty Voting for Imbalanced Deep Regression Problems

May 24, 2023
Yuchang Jiang, Vivien Sainte Fare Garnot, Konrad Schindler, Jan Dirk Wegner

Data imbalance is ubiquitous when applying machine learning to real-world problems, particularly regression problems. When training data are imbalanced, learning is dominated by the densely covered regions of the target distribution; consequently, the learned regressor tends to perform poorly in sparsely covered regions. Beyond standard measures like over-sampling or re-weighting, there are two main directions for handling learning from imbalanced data. For regression, recent work relies on the continuity of the distribution, whereas for classification there has been a trend to employ mixture-of-experts models and let some ensemble members specialize in predictions for the sparser regions. Here, we adapt the mixture-of-experts approach to the regression setting. A central question with this approach is how to fuse the predictions of multiple experts into one output. Drawing inspiration from recent work on probabilistic deep learning, we propose to base the fusion on the aleatoric uncertainties of the individual experts, thus obviating the need for a separate aggregation module. In our method, dubbed MOUV, each expert predicts not only an output value but also its uncertainty, which in turn serves as a statistically motivated criterion for relying on the right expert. We compare our method with existing alternatives on multiple public benchmarks and show that MOUV consistently outperforms the prior art while producing better-calibrated uncertainty estimates. Our code is available at link-upon-publication.
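
The core mechanism described above, each expert predicting a value plus an aleatoric uncertainty that drives the fusion, can be sketched in PyTorch as follows. This is a minimal interpretation assuming simple MLP experts and lowest-variance selection as the voting rule; the authors' architecture and exact fusion may differ.

```python
import torch
import torch.nn as nn

class UncertaintyVotingMoE(nn.Module):
    """Sketch of mixture-of-experts regression with uncertainty-based fusion.

    Each expert outputs a mean and a log-variance (aleatoric uncertainty);
    at inference the prediction of the most confident expert is returned.
    Details are assumptions, not the authors' code.
    """

    def __init__(self, in_dim, n_experts=3, hidden=64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))
            for _ in range(n_experts)
        )

    def forward(self, x):
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, 2)
        mu, log_var = outs[..., 0], outs[..., 1]                 # (B, E) each
        idx = log_var.argmin(dim=1, keepdim=True)                # most confident expert
        return mu.gather(1, idx).squeeze(1), log_var.gather(1, idx).squeeze(1)

def gaussian_nll(mu, log_var, y):
    # Per-expert training loss: negative log-likelihood of a Gaussian.
    return (0.5 * (log_var + (y - mu) ** 2 / log_var.exp())).mean()
```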

U-TILISE: A Sequence-to-sequence Model for Cloud Removal in Optical Satellite Time Series

May 22, 2023
Corinne Stucker, Vivien Sainte Fare Garnot, Konrad Schindler

Satellite image time series in the optical and infrared spectrum suffer from frequent data gaps due to cloud cover, cloud shadows, and temporary sensor outages. How best to reconstruct the missing pixel values and obtain complete, cloud-free image sequences has been a long-standing problem in remote sensing research. We approach that problem from the perspective of representation learning and develop U-TILISE, an efficient neural model that is able to implicitly capture spatio-temporal patterns of the spectral intensities and can therefore be trained to map a cloud-masked input sequence to a cloud-free output sequence. The model consists of a convolutional spatial encoder that maps each individual frame of the input sequence to a latent encoding; an attention-based temporal encoder that captures dependencies between those per-frame encodings and lets them exchange information along the time dimension; and a convolutional spatial decoder that decodes the latent embeddings back into multi-spectral images. We experimentally evaluate the proposed model on EarthNet2021, a dataset of Sentinel-2 time series acquired all over Europe, and demonstrate its superior ability to reconstruct the missing pixels. Compared to a standard interpolation baseline, it increases the PSNR by 1.8 dB at previously seen locations and by 1.3 dB at unseen locations.
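
A minimal PyTorch sketch of the encode, attend-over-time, decode pattern described above; the class name, layer sizes, and single-layer encoder/decoder are assumptions (the actual U-TILISE uses a more elaborate U-Net-style backbone).

```python
import torch
import torch.nn as nn

class SpatioTemporalInpainter(nn.Module):
    """Sketch: per-frame spatial encoder, attention along time, spatial decoder."""

    def __init__(self, bands=13, dim=32, heads=4):
        super().__init__()
        self.enc = nn.Conv2d(bands, dim, 3, padding=1)               # frame -> latent
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.dec = nn.Conv2d(dim, bands, 3, padding=1)               # latent -> frame

    def forward(self, x):                       # x: (B, T, C, H, W), cloud-masked inputs
        B, T, C, H, W = x.shape
        z = self.enc(x.flatten(0, 1)).view(B, T, -1, H, W)           # (B, T, D, H, W)
        # treat each pixel as a length-T sequence for temporal attention
        seq = z.permute(0, 3, 4, 1, 2).reshape(B * H * W, T, -1)
        seq, _ = self.attn(seq, seq, seq)
        z = seq.reshape(B, H, W, T, -1).permute(0, 3, 4, 1, 2)       # back to (B, T, D, H, W)
        return self.dec(z.flatten(0, 1)).view(B, T, C, H, W)         # gap-free sequence
```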

UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series

Apr 11, 2023
Patrick Ebel, Vivien Sainte Fare Garnot, Michael Schmitt, Jan Dirk Wegner, Xiao Xiang Zhu

Clouds and haze often occlude optical satellite images, hindering continuous, dense monitoring of the Earth's surface. Although modern deep learning methods can implicitly learn to ignore such occlusions, explicit cloud removal as pre-processing enables manual interpretation and allows training models when only a few annotations are available. Cloud removal is challenging due to the wide range of occlusion scenarios, from scenes partially visible through haze to completely opaque cloud coverage. Furthermore, integrating reconstructed images into downstream applications would greatly benefit from trustworthy quality assessment. In this paper, we introduce UnCRtainTS, a method for multi-temporal cloud removal that combines a novel attention-based architecture with a formulation for multivariate uncertainty prediction. Together, these two components set a new state of the art in image reconstruction on two public cloud removal datasets. Additionally, we show how the well-calibrated predicted uncertainties enable precise control of the reconstruction quality.
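
One generic way to train a network for the multivariate uncertainty prediction mentioned above is a multivariate Gaussian negative log-likelihood with a predicted Cholesky factor, sketched below; the paper's exact parametrization of the covariance is not assumed here.

```python
import torch
from torch.distributions import MultivariateNormal

def multivariate_nll(mean, tril_raw, target):
    """Per-pixel multivariate Gaussian negative log-likelihood.

    mean:     (N, C) predicted spectral intensities
    tril_raw: (N, C, C) raw network output turned into a valid Cholesky
              factor of the covariance (positive diagonal via softplus)
    target:   (N, C) observed cloud-free values
    """
    tril = tril_raw.tril(-1) + torch.diag_embed(
        torch.nn.functional.softplus(tril_raw.diagonal(dim1=-2, dim2=-1)) + 1e-6
    )
    dist = MultivariateNormal(loc=mean, scale_tril=tril)
    return -dist.log_prob(target).mean()
```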

Multi-Modal Temporal Attention Models for Crop Mapping from Satellite Time Series

Dec 14, 2021
Vivien Sainte Fare Garnot, Loic Landrieu, Nesrine Chehata

Optical and radar satellite time series are synergistic: optical images contain rich spectral information, while C-band radar captures useful geometric information and is immune to cloud cover. Motivated by the recent success of temporal attention-based methods across multiple crop mapping tasks, we investigate how these models can be adapted to operate on several modalities. We implement and evaluate multiple fusion schemes, including a novel approach, along with simple adjustments to the training procedure that significantly improve performance and efficiency with little added complexity. We show that most fusion schemes have advantages and drawbacks, making them relevant for specific settings. We then evaluate the benefit of multimodality across several tasks: parcel classification, pixel-based segmentation, and panoptic parcel segmentation. We show that by leveraging both optical and radar time series, multimodal temporal attention-based models can outmatch single-modality models in terms of performance and resilience to cloud cover. To conduct these experiments, we augment the PASTIS dataset with spatially aligned radar image time series. The resulting dataset, PASTIS-R, constitutes the first large-scale, multimodal, open-access satellite time series dataset with semantic and instance annotations.

* Under review 
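
For illustration, here is one plausible fusion scheme in the spirit of those compared in the paper: project each modality to a shared dimension, tag tokens with a learned modality embedding, and run a single temporal attention encoder over the concatenated optical + radar sequence. Dimensions and module choices are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EarlyTemporalFusion(nn.Module):
    """Sketch of token-level fusion of optical and radar time series."""

    def __init__(self, d_opt=10, d_sar=2, dim=64, heads=4):
        super().__init__()
        self.proj_opt = nn.Linear(d_opt, dim)
        self.proj_sar = nn.Linear(d_sar, dim)
        self.mod_emb = nn.Embedding(2, dim)        # 0 = optical, 1 = radar
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, opt, sar):                   # opt: (B, T1, d_opt), sar: (B, T2, d_sar)
        t_opt = self.proj_opt(opt) + self.mod_emb.weight[0]
        t_sar = self.proj_sar(sar) + self.mod_emb.weight[1]
        tokens = torch.cat([t_opt, t_sar], dim=1)  # (B, T1 + T2, dim)
        return self.encoder(tokens).mean(dim=1)    # pooled parcel descriptor
```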

Panoptic Segmentation of Satellite Image Time Series with Convolutional Temporal Attention Networks

Jul 26, 2021
Vivien Sainte Fare Garnot, Loic Landrieu

Unprecedented access to multi-temporal satellite imagery has opened new perspectives for a variety of Earth observation tasks. Among them, pixel-precise panoptic segmentation of agricultural parcels has major economic and environmental implications. While researchers have explored this problem for single images, we argue that the complex temporal patterns of crop phenology are better addressed with temporal sequences of images. In this paper, we present the first end-to-end, single-stage method for panoptic segmentation of Satellite Image Time Series (SITS). This module can be combined with our novel image sequence encoding network, which relies on temporal self-attention to extract rich and adaptive multi-scale spatio-temporal features. We also introduce PASTIS, the first open-access SITS dataset with panoptic annotations. We demonstrate the superiority of our encoder for semantic segmentation against multiple competing architectures and establish the first state of the art for panoptic segmentation of SITS. Our implementation and PASTIS are publicly available.

* Accepted at ICCV 2021. PASTIS dataset available at https://github.com/VSainteuf/pastis-benchmark, PyTorch implementation at https://github.com/VSainteuf/utae-paps
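
The heart of the sequence encoder, temporal self-attention that collapses the time dimension into a single feature map, can be sketched as follows. The real U-TAE applies this at multiple scales inside a U-Net, so this standalone module is only a simplified illustration under those assumptions.

```python
import torch
import torch.nn as nn

class TemporalCollapse(nn.Module):
    """Sketch: per-pixel attention weights over T acquisitions,
    used to collapse a feature sequence into one map."""

    def __init__(self, dim=32):
        super().__init__()
        self.score = nn.Linear(dim, 1)             # one attention logit per frame

    def forward(self, z):                          # z: (B, T, D, H, W)
        logits = self.score(z.permute(0, 1, 3, 4, 2)).squeeze(-1)   # (B, T, H, W)
        w = torch.softmax(logits, dim=1)                            # weights over time
        return (w.unsqueeze(2) * z).sum(dim=1)                      # (B, D, H, W)
```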

Lightweight Temporal Self-Attention for Classifying Satellite Image Time Series

Jul 08, 2020
Vivien Sainte Fare Garnot, Loic Landrieu

The increasing accessibility and precision of Earth observation satellite data offer considerable opportunities for industrial and state actors alike. This, however, calls for efficient methods able to process time series on a global scale. Building on recent work that employs multi-headed self-attention mechanisms to classify remote sensing time sequences, we propose a modification of the Temporal Attention Encoder. In our network, the channels of the temporal inputs are distributed among several compact attention heads operating in parallel. Each head extracts highly specialized temporal features, which are in turn concatenated into a single representation. Our approach outperforms other state-of-the-art time series classification algorithms on an open-access satellite image dataset, while using significantly fewer parameters and at a reduced computational complexity.
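
The channel-distribution idea reads roughly as follows in PyTorch: split the input channels into contiguous per-head slices, let each head compute its own attention over time with a learned master query, and concatenate the per-head outputs. Sizes and details are illustrative, not the exact L-TAE layer.

```python
import torch
import torch.nn as nn

class ChannelGroupAttention(nn.Module):
    """Sketch of compact attention heads, each operating on a channel slice."""

    def __init__(self, dim=128, n_head=16):
        super().__init__()
        assert dim % n_head == 0
        self.n_head, self.d_h = n_head, dim // n_head
        self.query = nn.Parameter(torch.randn(n_head, self.d_h))  # one master query per head

    def forward(self, x):                                         # x: (B, T, dim)
        B, T, _ = x.shape
        xh = x.reshape(B, T, self.n_head, self.d_h)               # split channels among heads
        scores = torch.einsum('bthd,hd->bth', xh, self.query) / self.d_h ** 0.5
        attn = torch.softmax(scores, dim=1)                       # weights over time, per head
        out = torch.einsum('bth,bthd->bhd', attn, xh)             # (B, n_head, d_h)
        return out.flatten(1)                                     # concatenated heads: (B, dim)
```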

Metric-Guided Prototype Learning

Jul 06, 2020
Vivien Sainte Fare Garnot, Loic Landrieu

Not all errors are created equal. This is especially true for many key machine learning applications. In the case of classification tasks, the hierarchy of errors can be summarized in the form of a cost matrix, which assesses the severity of confusing each pair of classes. When certain conditions are met, this matrix defines a metric, which we use in a new and versatile classification layer to model the disparity of errors. Our method relies on jointly learning a feature-extracting network and a set of class representations, or prototypes, whose relative arrangement incorporates the error metric. Our approach consistently improves the network's predictions with regard to the cost matrix. Furthermore, when the induced metric contains insight into the data structure, our approach also improves the overall precision. Experiments on three different tasks and public datasets, from agricultural time series classification to depth image semantic segmentation, validate our approach.
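
A minimal sketch of such a prototype layer, assuming logits given by negative distances to learnable prototypes and a quadratic penalty pulling pairwise prototype distances toward the cost matrix; the paper's actual distortion loss may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetricGuidedPrototypes(nn.Module):
    """Sketch: classify by distance to prototypes whose arrangement
    is regularized to follow a given cost-matrix metric."""

    def __init__(self, feat_dim, n_classes, cost_matrix, reg=0.1):
        super().__init__()
        self.protos = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.register_buffer('cost', cost_matrix)   # (K, K) metric between classes
        self.reg = reg

    def forward(self, feats):                        # feats: (B, feat_dim)
        return -torch.cdist(feats, self.protos)      # closer prototype -> higher score

    def loss(self, feats, labels):
        ce = F.cross_entropy(self.forward(feats), labels)
        d = torch.cdist(self.protos, self.protos)    # pairwise prototype distances
        distortion = ((d - self.cost) ** 2).mean()   # align arrangement with the cost metric
        return ce + self.reg * distortion
```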
