
Mathieu Aubry


Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives

Jul 11, 2023
Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei A. Efros, Mathieu Aubry


Given a set of calibrated images of a scene, we present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives. While many approaches focus on recovering high-fidelity 3D scenes, we focus on parsing a scene into mid-level 3D representations made of a small set of textured primitives. Such representations are interpretable, easy to manipulate and suited for physics-based simulations. Moreover, unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images through differentiable rendering. Specifically, we model primitives as textured superquadric meshes and optimize their parameters from scratch with an image rendering loss. We highlight the importance of modeling transparency for each primitive, which is critical for optimization and also enables handling varying numbers of primitives. We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points, while providing amodal shape completions of unseen object regions. We compare our approach to the state of the art on diverse scenes from DTU, and demonstrate its robustness on real-life captures from BlendedMVS and Nerfstudio. We also showcase how our results can be used to effortlessly edit a scene or perform physical simulations. Code and video results are available at https://www.tmonnier.com/DBW .

* Project webpage with code and videos: https://www.tmonnier.com/DBW 
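
The core optimization can be pictured with a short, hedged sketch in PyTorch: superquadric surface points are generated from learnable scale and shape-exponent parameters, and a per-primitive opacity is optimized alongside them against an image rendering loss. This is an illustration only, not the authors' implementation; the differentiable renderer call is a placeholder.

    import torch

    def superquadric_points(scale, eps, n=32):
        # Sample the standard superquadric parametric surface on an n x n grid.
        eta = torch.linspace(-torch.pi / 2 + 1e-3, torch.pi / 2 - 1e-3, n)
        omega = torch.linspace(-torch.pi + 1e-3, torch.pi - 1e-3, n)
        eta, omega = torch.meshgrid(eta, omega, indexing="ij")
        spow = lambda a, e: torch.sign(a) * torch.abs(a).pow(e)  # signed power
        x = scale[0] * spow(torch.cos(eta), eps[0]) * spow(torch.cos(omega), eps[1])
        y = scale[1] * spow(torch.cos(eta), eps[0]) * spow(torch.sin(omega), eps[1])
        z = scale[2] * spow(torch.sin(eta), eps[0])
        return torch.stack([x, y, z], dim=-1).reshape(-1, 3)

    K = 8  # maximum number of primitives; transparency lets unused ones fade out
    params = {
        "scale": torch.nn.Parameter(0.5 * torch.ones(K, 3)),
        "eps": torch.nn.Parameter(torch.ones(K, 2)),
        "translation": torch.nn.Parameter(0.1 * torch.randn(K, 3)),
        "opacity": torch.nn.Parameter(torch.zeros(K)),  # sigmoid -> per-primitive transparency
    }
    optimizer = torch.optim.Adam(params.values(), lr=1e-2)

    # One optimization step would look like (differentiable_render is a hypothetical placeholder):
    #   prims = [superquadric_points(params["scale"][k], params["eps"][k]) + params["translation"][k]
    #            for k in range(K)]
    #   image_hat = differentiable_render(prims, textures, torch.sigmoid(params["opacity"]), camera)
    #   loss = ((image_hat - image_gt) ** 2).mean(); loss.backward(); optimizer.step()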

Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans

Apr 19, 2023
Romain Loiseau, Elliot Vincent, Mathieu Aubry, Loic Landrieu


We propose an unsupervised method for parsing large 3D scans of real-world scenes into interpretable parts. Our goal is to provide a practical tool for analyzing 3D scenes with unique characteristics in the context of aerial surveying and mapping, without relying on application-specific user annotations. Our approach is based on a probabilistic reconstruction model that decomposes an input 3D point cloud into a small set of learned prototypical shapes. Our model provides an interpretable reconstruction of complex scenes and leads to relevant instance and semantic segmentations. To demonstrate the usefulness of our results, we introduce a novel dataset of seven diverse aerial LiDAR scans. We show that our method outperforms state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. Our method offers a significant advantage over existing approaches, as it does not require any manual annotations, making it a practical and efficient tool for 3D scene analysis. Our code and dataset are available at https://imagine.enpc.fr/~loiseaur/learnable-earth-parser
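
As a rough illustration of the reconstruction-by-prototypes idea (not the paper's probabilistic model), the sketch below fits a few learnable prototype point sets to a point cloud with a Chamfer loss; the real method additionally learns shape transformations and yields instance and semantic segmentations.

    import torch

    def chamfer(a, b):
        # Symmetric Chamfer distance between two point sets (N, 3) and (M, 3).
        d = torch.cdist(a, b)
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    K, P = 4, 128                                  # prototypes, points per prototype
    prototypes = torch.nn.Parameter(torch.randn(K, P, 3) * 0.1)
    offsets = torch.nn.Parameter(torch.randn(K, 3))
    optimizer = torch.optim.Adam([prototypes, offsets], lr=1e-2)

    scan = torch.randn(2048, 3)                    # stand-in for an aerial LiDAR tile
    for _ in range(200):
        recon = (prototypes + offsets[:, None, :]).reshape(-1, 3)  # union of placed prototypes
        loss = chamfer(scan, recon)
        optimizer.zero_grad(); loss.backward(); optimizer.step()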


Pixel-wise Agricultural Image Time Series Classification: Comparisons and a Deformable Prototype-based Approach

Mar 22, 2023
Elliot Vincent, Jean Ponce, Mathieu Aubry


Improvements in Earth observation by satellites allow for imagery of ever higher temporal and spatial resolution. Leveraging this data for agricultural monitoring is key for addressing environmental and economic challenges. Current methods for crop segmentation using temporal data either rely on annotated data or are heavily engineered to compensate for the lack of supervision. In this paper, we present and compare datasets and methods for both supervised and unsupervised pixel-wise segmentation of satellite image time series (SITS). We also introduce an approach to add invariance to spectral deformations and temporal shifts to classical prototype-based methods such as K-means and the Nearest Centroid Classifier (NCC). We show that this simple and highly interpretable method leads to meaningful results in both the supervised and unsupervised settings and significantly improves the state of the art for unsupervised classification of agricultural time series on four recent SITS datasets.
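
The shift-invariance idea can be illustrated with a minimal sketch, assuming per-pixel time series of shape (T, C): a nearest-centroid classifier whose distance to each prototype is minimized over small temporal shifts. The paper's actual model also handles spectral deformations; this toy version only shows the principle.

    import numpy as np

    def shifted(x, s):
        # Roll a (T, C) series by s time steps (circular shift, for simplicity).
        return np.roll(x, s, axis=0)

    def shift_invariant_distance(x, proto, max_shift=5):
        # Minimum mean squared error over a small range of temporal shifts.
        return min(np.mean((shifted(proto, s) - x) ** 2) for s in range(-max_shift, max_shift + 1))

    def classify(x, prototypes, max_shift=5):
        # prototypes: dict {class_id: (T, C) centroid}
        return min(prototypes, key=lambda c: shift_invariant_distance(x, prototypes[c], max_shift))

    # Toy usage with random data: 52 weekly acquisitions, 4 spectral bands.
    rng = np.random.default_rng(0)
    prototypes = {0: rng.normal(size=(52, 4)), 1: rng.normal(size=(52, 4))}
    pixel_series = prototypes[1] + 0.1 * rng.normal(size=(52, 4))
    print(classify(pixel_series, prototypes))      # prints 1: nearest prototype under the shift-invariant distance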


The Learnable Typewriter: A Generative Approach to Text Line Analysis

Feb 06, 2023
Ioannis Siglidis, Nicolas Gonthier, Julien Gaubil, Tom Monnier, Mathieu Aubry


We present a generative document-specific approach to character analysis and recognition in text lines. Our main idea is to build on unsupervised multi-object segmentation methods and in particular those that reconstruct images based on a limited amount of visual elements, called sprites. Our approach can learn a large number of different characters and leverage line-level annotations when available. Our contribution is twofold. First, we provide the first adaptation and evaluation of a deep unsupervised multi-object segmentation approach for text line analysis. Since these methods have mainly been evaluated on synthetic data in a completely unsupervised setting, demonstrating that they can be adapted and quantitatively evaluated on real text images, and that they can be trained using weak supervision, represents significant progress. Second, we demonstrate the potential of our method for new applications, more specifically in the field of paleography, which studies the history and variations of handwriting, and for cipher analysis. We evaluate our approach on three very different datasets: a printed volume of the Google1000 dataset, the Copiale cipher and historical handwritten charters from the 12th and early 13th centuries.

* For the code and a quick overview, visit the project webpage at http://imagine.enpc.fr/~siglidii/learnable-typewriter
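
A hedged sketch of the sprite-compositing idea (not the authors' architecture): learned sprites with per-pixel transparency are pasted left to right onto a background band to reconstruct a text line, so that a reconstruction loss can drive sprite learning. The sprite indices and positions are hard-coded here for illustration; the real model predicts them.

    import torch

    def paste(canvas, sprite, alpha, x0):
        # Alpha-composite a (S, S) sprite onto a (S, W) canvas band starting at column x0.
        out = canvas.clone()
        s = sprite.shape[1]
        out[:, x0:x0 + s] = alpha * sprite + (1 - alpha) * canvas[:, x0:x0 + s]
        return out

    S, W, K = 24, 256, 40                              # sprite size, line width, number of sprites
    sprite_logits = torch.nn.Parameter(torch.randn(K, S, S))
    alpha_logits = torch.nn.Parameter(torch.randn(K, S, S))

    line = torch.ones(S, W)                            # white background band
    for i, k in enumerate([3, 7, 7, 12]):              # a toy sequence of sprite indices
        line = paste(line, torch.sigmoid(sprite_logits[k]), torch.sigmoid(alpha_logits[k]), x0=10 + i * S)
    # Training would compare `line` to a real text-line image with a reconstruction loss,
    # so that the sprites come to resemble the characters of the document.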

MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare

Dec 13, 2022
Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, Josef Sivic


We introduce MegaPose, a method to estimate the 6D pose of novel objects, that is, objects unseen during training. At inference time, the method only assumes knowledge of (i) a region of interest displaying the object in the image and (ii) a CAD model of the observed object. The contributions of this work are threefold. First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects. The shape and coordinate system of the novel object are provided as inputs to the network by rendering multiple synthetic views of the object's CAD model. Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner. Third, we introduce a large-scale synthetic dataset of photorealistic images of thousands of objects with diverse visual and shape properties and show that this diversity is crucial to obtain good generalization performance on novel objects. We train our approach on this large synthetic dataset and apply it without retraining to hundreds of novel objects in real images from several pose estimation benchmarks. Our approach achieves state-of-the-art performance on the ModelNet and YCB-Video datasets. An extensive evaluation on the 7 core datasets of the BOP challenge demonstrates that our approach achieves performance competitive with existing approaches that require access to the target objects during training. Code, dataset and trained models are available on the project page: https://megapose6d.github.io/.

* CoRL 2022 
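
Schematically, the coarse-plus-refine pipeline can be sketched as below; `render_view`, `score_net` and `refiner_net` are hypothetical placeholders standing in for the rendering backend and the trained networks, and the pose update composition assumes 4x4 rigid-transform matrices.

    import torch

    def coarse_estimate(image_crop, cad_model, candidate_poses, render_view, score_net):
        # Score each (rendering, observation) pair and keep the most plausible candidate.
        scores = []
        for pose in candidate_poses:
            rendering = render_view(cad_model, pose)
            scores.append(score_net(rendering, image_crop))
        return candidate_poses[int(torch.stack(scores).argmax())]

    def refine(image_crop, cad_model, pose, render_view, refiner_net, n_iter=5):
        # Render-and-compare refinement: predict a pose update from the image pair.
        for _ in range(n_iter):
            rendering = render_view(cad_model, pose)
            delta = refiner_net(rendering, image_crop)   # predicted rigid-transform update
            pose = delta @ pose                          # compose the update (4x4 matrices)
        return pose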

A Model You Can Hear: Audio Identification with Playable Prototypes

Aug 05, 2022
Romain Loiseau, Baptiste Bouvier, Yann Teytaut, Elliot Vincent, Mathieu Aubry, Loic Landrieu


Machine learning techniques have proved useful for classifying and analyzing audio content. However, recent methods typically rely on abstract and high-dimensional representations that are difficult to interpret. Inspired by transformation-invariant approaches developed for image and 3D data, we propose an audio identification model based on learnable spectral prototypes. Equipped with dedicated transformation networks, these prototypes can be used to cluster and classify input audio samples from large collections of sounds. Our model can be trained with or without supervision and reaches state-of-the-art results for speaker and instrument identification, while remaining easily interpretable. The code is available at: https://github.com/romainloiseau/a-model-you-can-hear


Online Segmentation of LiDAR Sequences: Dataset and Algorithm

Jun 16, 2022
Romain Loiseau, Mathieu Aubry, Loïc Landrieu


Roof-mounted spinning LiDAR sensors are widely used by autonomous vehicles, driving the need for real-time processing of 3D point sequences. However, most LiDAR semantic segmentation datasets and algorithms split these acquisitions into $360^\circ$ frames, leading to acquisition latency that is incompatible with realistic real-time applications and evaluations. We address this issue with two key contributions. First, we introduce HelixNet, a $10$ billion point dataset with fine-grained labels, timestamps, and sensor rotation information that allows an accurate assessment of real-time readiness of segmentation algorithms. Second, we propose Helix4D, a compact and efficient spatio-temporal transformer architecture specifically designed for rotating LiDAR point sequences. Helix4D operates on acquisition slices that correspond to a fraction of a full rotation of the sensor, significantly reducing the total latency. We present an extensive benchmark of the performance and real-time readiness of several state-of-the-art models on HelixNet and SemanticKITTI. Helix4D reaches accuracy on par with the best segmentation algorithms with a reduction of more than $5\times$ in terms of latency and $50\times$ in model size. Code and data are available at: https://romainloiseau.fr/helixnet

* Code and data are available at: https://romainloiseau.fr/helixnet 
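
The latency argument can be illustrated with a small sketch of the slicing step only (not the transformer itself): incoming points are grouped into angular sectors covering a fraction of a rotation, and each slice can be processed as soon as it completes.

    import numpy as np

    def azimuth_slices(points_xy, timestamps, n_slices=9):
        # Assign each point to one of n_slices angular sectors of the sensor rotation.
        azimuth = np.arctan2(points_xy[:, 1], points_xy[:, 0])          # in [-pi, pi)
        slice_id = ((azimuth + np.pi) / (2 * np.pi) * n_slices).astype(int) % n_slices
        order = np.argsort(timestamps)                                  # streaming order
        return [order[slice_id[order] == s] for s in range(n_slices)]

    # Toy usage: 9 slices of 40 degrees each; each slice can be fed to the segmentation
    # model as soon as it is acquired, cutting acquisition latency roughly by the number of slices.
    pts = np.random.randn(10000, 3)
    ts = np.random.rand(10000)
    for idx in azimuth_slices(pts[:, :2], ts):
        pass  # run semantic segmentation on pts[idx]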

Learning Joint Surface Atlases

Jun 13, 2022
Theo Deprelle, Thibault Groueix, Noam Aigerman, Vladimir G. Kim, Mathieu Aubry


This paper describes new techniques for learning atlas-like representations of 3D surfaces, i.e. homeomorphic transformations from a 2D domain to surfaces. Compared to prior work, we propose two major contributions. First, instead of mapping a fixed 2D domain, such as a set of square patches, to the surface, we learn a continuous 2D domain with arbitrary topology by optimizing a point sampling distribution represented as a mixture of Gaussians. Second, we learn consistent mappings in both directions: charts, from the 3D surface to 2D domain, and parametrizations, their inverse. We demonstrate that this improves the quality of the learned surface representation, as well as its consistency in a collection of related shapes. It thus leads to improvements for applications such as correspondence estimation, texture transfer, and consistent UV mapping. As an additional technical contribution, we outline that, while incorporating normal consistency has clear benefits, it leads to issues in the optimization, and that these issues can be mitigated using a simple repulsive regularization. We demonstrate that our contributions provide better surface representation than existing baselines.
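
The bidirectional mapping can be pictured with a minimal sketch, assuming simple MLPs for the chart (3D to 2D) and the parametrization (2D to 3D), tied by a cycle-consistency loss; the learned Gaussian-mixture sampling domain and the repulsive regularization from the paper are omitted.

    import torch
    import torch.nn as nn

    chart = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))           # 3D surface -> 2D domain
    parametrization = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3)) # 2D domain -> 3D surface
    optimizer = torch.optim.Adam(list(chart.parameters()) + list(parametrization.parameters()), lr=1e-3)

    surface_pts = torch.randn(1024, 3)             # stand-in for sampled surface points
    for _ in range(100):
        uv = chart(surface_pts)
        recon = parametrization(uv)
        loss = ((recon - surface_pts) ** 2).mean()  # chart/parametrization cycle consistency
        optimizer.zero_grad(); loss.backward(); optimizer.step()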


Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency

Apr 21, 2022
Tom Monnier, Matthew Fisher, Alexei A. Efros, Mathieu Aubry


Approaches to single-view reconstruction typically rely on viewpoint annotations, silhouettes, the absence of background, multiple views of the same instance, a template shape, or symmetry. We avoid all of these forms of supervision and hypotheses by explicitly leveraging the consistency between images of different object instances. As a result, our method can learn from large collections of unlabelled images depicting the same object category. Our main contributions are two approaches to leverage cross-instance consistency: (i) progressive conditioning, a training strategy to gradually specialize the model from category to instances in a curriculum learning fashion; (ii) swap reconstruction, a loss enforcing consistency between instances having similar shape or texture. Critical to the success of our method are also: our structured autoencoding architecture decomposing an image into explicit shape, texture, pose, and background; an adapted formulation of differentiable rendering; and a new optimization scheme alternating between 3D and pose learning. We compare our approach, UNICORN, both on the diverse synthetic ShapeNet dataset - the classical benchmark for methods requiring multiple views as supervision - and on standard real-image benchmarks (Pascal3D+ Car, CUB-200) for which most methods require known templates and silhouette annotations. We also showcase applicability to more challenging real-world collections (CompCars, LSUN), where silhouettes are not available and images are not cropped around the object.

* Project webpage with code and videos: http://imagine.enpc.fr/~monniert/UNICORN/ 
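
The swap-reconstruction loss can be sketched as follows, assuming placeholder `encoder` and `decoder` networks that split an image into a shape code and the remaining factors (texture, pose, background); the threshold used to decide that two shape codes are "similar" is illustrative, not the paper's.

    import torch

    def swap_reconstruction_loss(img_a, img_b, encoder, decoder):
        # Each encoding splits into a shape code and the remaining instance-specific factors.
        shape_a, rest_a = encoder(img_a)
        shape_b, rest_b = encoder(img_b)
        # Only apply the swap when the shape codes are already close.
        if torch.norm(shape_a - shape_b) > 1.0:      # illustrative threshold
            return torch.zeros((), requires_grad=True)
        recon_a = decoder(shape_b, rest_a)           # b's shape with a's texture/pose/background
        recon_b = decoder(shape_a, rest_b)
        return ((recon_a - img_a) ** 2).mean() + ((recon_b - img_b) ** 2).mean()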

Focal Length and Object Pose Estimation via Render and Compare

Apr 11, 2022
Georgy Ponimatkin, Yann Labbé, Bryan Russell, Mathieu Aubry, Josef Sivic


We introduce FocalPose, a neural render-and-compare method for jointly estimating the camera-object 6D pose and camera focal length given a single RGB input image depicting a known object. The contributions of this work are twofold. First, we derive a focal length update rule that extends an existing state-of-the-art render-and-compare 6D pose estimator to address the joint estimation task. Second, we investigate several different loss functions for jointly estimating the object pose and focal length. We find that a combination of direct focal length regression with a reprojection loss disentangling the contribution of translation, rotation, and focal length leads to improved results. We show results on three challenging benchmark datasets that depict known 3D models in uncontrolled settings. We demonstrate that our focal length and 6D pose estimates have lower error than the existing state-of-the-art methods.

* Accepted to CVPR2022. Code available at http://github.com/ponimatkin/focalpose 
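
As a hedged illustration of why joint estimation is possible at all: in a pinhole model the focal length enters the reprojection differentiably, so a reprojection loss can update translation and focal length together. The sketch below is not FocalPose's learned render-and-compare update rule, and it exposes the well-known ambiguity between focal length and depth that motivates studying disentangled losses.

    import torch

    def project(points_3d, R, t, f, c=(0.0, 0.0)):
        # Pinhole projection of (N, 3) object points with rotation R, translation t,
        # focal length f and principal point c.
        cam = points_3d @ R.T + t
        u = f * cam[:, 0] / cam[:, 2] + c[0]
        v = f * cam[:, 1] / cam[:, 2] + c[1]
        return torch.stack([u, v], dim=-1)

    points = torch.randn(100, 3) + torch.tensor([0.0, 0.0, 5.0])
    target_2d = project(points, torch.eye(3), torch.zeros(3), f=600.0)

    # Optimize translation and focal length jointly (rotation kept fixed here for brevity).
    t = torch.nn.Parameter(torch.tensor([0.1, -0.1, 0.2]))
    f = torch.nn.Parameter(torch.tensor(500.0))
    optimizer = torch.optim.Adam([t, f], lr=1e-2)
    for _ in range(500):
        loss = ((project(points, torch.eye(3), t, f) - target_2d) ** 2).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
    # The reprojection error decreases, but f and the depth component of t can trade off
    # against each other, which is why a loss disentangling their contributions helps.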