
Elliot Vincent


Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans

Apr 19, 2023
Romain Loiseau, Elliot Vincent, Mathieu Aubry, Loic Landrieu

Figures 1–4 for Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans

We propose an unsupervised method for parsing large 3D scans of real-world scenes into interpretable parts. Our goal is to provide a practical tool for analyzing 3D scenes with unique characteristics in the context of aerial surveying and mapping, without relying on application-specific user annotations. Our approach is based on a probabilistic reconstruction model that decomposes an input 3D point cloud into a small set of learned prototypical shapes. Our model provides an interpretable reconstruction of complex scenes and leads to relevant instance and semantic segmentations. To demonstrate the usefulness of our results, we introduce a novel dataset of seven diverse aerial LiDAR scans. We show that our method outperforms state-of-the-art unsupervised methods in decomposition accuracy while remaining visually interpretable. Our method offers a significant advantage over existing approaches, as it does not require any manual annotations, making it a practical and efficient tool for 3D scene analysis. Our code and dataset are available at https://imagine.enpc.fr/~loiseaur/learnable-earth-parser
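The abstract's core idea, explaining a point cloud with a small set of prototype shapes, can be illustrated with a much simpler toy than the paper's probabilistic model: a hard nearest-prototype assignment with a reconstruction error. This is a hypothetical sketch for intuition only, not the authors' method; the function names and the centroid-based distance are assumptions.

```python
import numpy as np

def assign_to_prototypes(points, prototypes):
    """Assign each 3D point to the prototype with the nearest centroid.

    points: (N, 3) array; prototypes: list of (M_k, 3) arrays.
    Returns an (N,) array of prototype indices (a crude instance labeling).
    """
    centroids = np.stack([p.mean(axis=0) for p in prototypes])  # (K, 3)
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def reconstruction_error(points, prototypes, labels):
    """Mean distance from each point to its assigned prototype centroid."""
    centroids = np.stack([p.mean(axis=0) for p in prototypes])
    return float(np.mean(np.linalg.norm(points - centroids[labels], axis=-1)))
```

The actual model learns the prototype shapes jointly with per-instance transformations by minimizing a reconstruction objective; this sketch only shows the assignment/reconstruction structure such a decomposition produces.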


Pixel-wise Agricultural Image Time Series Classification: Comparisons and a Deformable Prototype-based Approach

Mar 22, 2023
Elliot Vincent, Jean Ponce, Mathieu Aubry

Figures 1–4 for Pixel-wise Agricultural Image Time Series Classification: Comparisons and a Deformable Prototype-based Approach

Improvements in Earth observation by satellites allow for imagery of ever higher temporal and spatial resolution. Leveraging this data for agricultural monitoring is key for addressing environmental and economic challenges. Current methods for crop segmentation using temporal data either rely on annotated data or are heavily engineered to compensate for the lack of supervision. In this paper, we present and compare datasets and methods for both supervised and unsupervised pixel-wise segmentation of satellite image time series (SITS). We also introduce an approach to add invariance to spectral deformations and temporal shifts to classical prototype-based methods such as K-means and the Nearest Centroid Classifier (NCC). We show that this simple and highly interpretable method leads to meaningful results in both the supervised and unsupervised settings and significantly improves the state of the art for unsupervised classification of agricultural time series on four recent SITS datasets.
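One way to picture the temporal-shift invariance the abstract adds to NCC: score a pixel's time series against each class prototype under a small set of circular time shifts and keep the best. This is a minimal hedged sketch, not the paper's deformable-prototype formulation; the function names, the circular-shift choice, and `max_shift` are assumptions.

```python
import numpy as np

def shift_invariant_distance(series, prototype, max_shift=2):
    """Minimum squared L2 distance over circular temporal shifts of the prototype."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(prototype, s, axis=0)
        best = min(best, float(((series - shifted) ** 2).sum()))
    return best

def ncc_predict(series, prototypes, max_shift=2):
    """Nearest Centroid Classifier made shift-invariant (toy sketch)."""
    dists = [shift_invariant_distance(series, p, max_shift) for p in prototypes]
    return int(np.argmin(dists))
```

A crop whose growth curve is delayed by a few acquisition dates still matches its class prototype under some shift, which is the kind of robustness the paper's (richer, learned) deformations provide.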


A Model You Can Hear: Audio Identification with Playable Prototypes

Aug 05, 2022
Romain Loiseau, Baptiste Bouvier, Yann Teytaut, Elliot Vincent, Mathieu Aubry, Loic Landrieu

Figures 1–4 for A Model You Can Hear: Audio Identification with Playable Prototypes

Machine learning techniques have proved useful for classifying and analyzing audio content. However, recent methods typically rely on abstract and high-dimensional representations that are difficult to interpret. Inspired by transformation-invariant approaches developed for image and 3D data, we propose an audio identification model based on learnable spectral prototypes. Equipped with dedicated transformation networks, these prototypes can be used to cluster and classify input audio samples from large collections of sounds. Our model can be trained with or without supervision and reaches state-of-the-art results for speaker and instrument identification, while remaining easily interpretable. The code is available at: https://github.com/romainloiseau/a-model-you-can-hear
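The spirit of prototype-based audio identification can be conveyed with a tiny example: classify a spectrum by which prototype reconstructs it best after a simple transformation, here a single closed-form gain. This is an illustrative assumption-laden sketch, not the paper's transformation networks; the function names and the scalar-gain transformation are hypothetical simplifications.

```python
import numpy as np

def best_gain(x, proto):
    """Closed-form scalar gain g minimizing ||x - g * proto||^2."""
    denom = float(proto @ proto)
    return float(x @ proto) / denom if denom > 0 else 0.0

def classify_spectrum(x, prototypes):
    """Assign x to the prototype with the lowest gain-adjusted reconstruction error."""
    errors = []
    for p in prototypes:
        g = best_gain(x, p)
        errors.append(float(((x - g * p) ** 2).sum()))
    return int(np.argmin(errors))
```

Because the prototypes live in spectral space, they remain "playable" and inspectable, which is the interpretability argument the abstract makes; the paper replaces the scalar gain with learned transformation networks.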


Unsupervised Layered Image Decomposition into Object Prototypes

Apr 29, 2021
Tom Monnier, Elliot Vincent, Jean Ponce, Mathieu Aubry

Figures 1–4 for Unsupervised Layered Image Decomposition into Object Prototypes

We present an unsupervised learning framework for decomposing images into layers of automatically discovered object models. Contrary to recent approaches that model image layers with autoencoder networks, we represent them as explicit transformations of a small set of prototypical images. Our model has three main components: (i) a set of object prototypes in the form of learnable images with a transparency channel, which we refer to as sprites; (ii) differentiable parametric functions predicting occlusions and transformation parameters necessary to instantiate the sprites in a given image; (iii) a layered image formation model with occlusion for compositing these instances into complete images including background. By jointly learning the sprites and occlusion/transformation predictors to reconstruct images, our approach not only yields accurate layered image decompositions, but also identifies object categories and instance parameters. We first validate our approach by providing results on par with the state of the art on standard multi-object synthetic benchmarks (Tetrominoes, Multi-dSprites, CLEVR6). We then demonstrate the applicability of our model to real images in tasks that include clustering (SVHN, GTSRB), cosegmentation (Weizmann Horse) and object discovery from unfiltered social network images. To the best of our knowledge, our approach is the first layered image decomposition algorithm that learns an explicit and shared concept of object type, and is robust enough to be applied to real images.
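Component (iii) of the abstract, the layered image formation model with occlusion, boils down to standard back-to-front alpha compositing of RGBA sprites onto a background. A minimal sketch, assuming float images in [0, 1] and axis-aligned sprite placement; the function name and the fixed-position interface are illustrative, not the paper's differentiable formulation.

```python
import numpy as np

def composite_layers(background, sprites, positions):
    """Alpha-composite RGBA sprites onto an RGB background, back to front.

    background: (H, W, 3) float array; sprites: list of (h, w, 4) arrays
    with a transparency channel; positions: list of (row, col) offsets.
    Later sprites in the list occlude earlier ones where they overlap.
    """
    canvas = background.copy()
    for sprite, (y, x) in zip(sprites, positions):
        h, w = sprite.shape[:2]
        region = canvas[y:y + h, x:x + w]
        alpha = sprite[..., 3:4]  # (h, w, 1), broadcasts over RGB
        canvas[y:y + h, x:x + w] = alpha * sprite[..., :3] + (1 - alpha) * region
    return canvas
```

In the paper this compositing is differentiable, so gradients flow back through the occlusion order and transformation parameters into the learnable sprites themselves.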

* Project webpage: https://imagine.enpc.fr/~monniert/DTI-Sprites 