Thibault Groueix

CNOS: A Strong Baseline for CAD-based Novel Object Segmentation

Aug 03, 2023
Van Nguyen Nguyen, Tomas Hodan, Georgy Ponimatkin, Thibault Groueix, Vincent Lepetit

We propose a simple three-stage approach to segment unseen objects in RGB images using their CAD models. Leveraging recent powerful foundation models, DINOv2 and Segment Anything, we create descriptors and generate proposals, including binary masks for a given input RGB image. By matching proposals with reference descriptors created from CAD models, we achieve precise object ID assignment along with modal masks. We experimentally demonstrate that our method achieves state-of-the-art results in CAD-based novel object segmentation, surpassing existing approaches on the seven core datasets of the BOP challenge by 19.8% AP using the same BOP evaluation protocol. Our source code is available at https://github.com/nv-nguyen/cnos.
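
As an illustration of the matching stage (not the released CNOS code), the sketch below assigns an object ID to each Segment Anything proposal by cosine similarity between its DINOv2 descriptor and the descriptors of templates rendered from each CAD model; descriptor sizes and variable names are assumptions.

```python
# Matching stage only: proposal descriptors vs. per-object reference descriptors.
import torch
import torch.nn.functional as F

def assign_object_ids(
    proposal_desc: torch.Tensor,              # (P, D) DINOv2 descriptors of SAM proposals
    reference_desc: dict[int, torch.Tensor],  # object ID -> (T, D) descriptors of rendered templates
):
    """Return the best-matching object ID and its score for every proposal."""
    prop = F.normalize(proposal_desc, dim=-1)
    obj_ids = sorted(reference_desc)
    per_object = []
    for obj_id in obj_ids:
        ref = F.normalize(reference_desc[obj_id], dim=-1)  # (T, D)
        sim = prop @ ref.T                                  # (P, T) cosine similarities
        per_object.append(sim.max(dim=1).values)            # best template per proposal
    per_object = torch.stack(per_object, dim=1)             # (P, num_objects)
    best = per_object.argmax(dim=1)
    return [obj_ids[i] for i in best.tolist()], per_object.max(dim=1).values

# Usage with random stand-in descriptors:
ids, scores = assign_object_ids(torch.randn(5, 384),
                                {1: torch.randn(42, 384), 2: torch.randn(42, 384)})
```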

PSDR-Room: Single Photo to Scene using Differentiable Rendering

Jul 06, 2023
Kai Yan, Fujun Luan, Miloš Hašan, Thibault Groueix, Valentin Deschaintre, Shuang Zhao

A 3D digital scene contains many components: lights, materials and geometries, interacting to reach the desired appearance. Staging such a scene is time-consuming and requires both artistic and technical skills. In this work, we propose PSDR-Room, a system that optimizes the lighting as well as the pose and materials of individual objects to match a target image of a room scene, with minimal user input. To this end, we leverage a recent path-space differentiable rendering approach that provides unbiased gradients of the rendering with respect to geometry, lighting, and procedural materials, allowing us to optimize all of these components using gradient descent to visually match the input photo appearance. We use recent single-image scene understanding methods to initialize the optimization and search for appropriate 3D models and materials. We evaluate our method on real photographs of indoor scenes and demonstrate the editability of the resulting scene components.
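
A minimal sketch of this optimization loop, assuming a differentiable renderer is available; render_scene is a hypothetical stand-in for the path-space differentiable rendering used in the paper, and the parameterization of pose, lighting, and material is purely illustrative.

```python
import torch

def render_scene(pose, light, material):
    """Placeholder for a differentiable renderer: returns an HxWx3 image."""
    img = torch.sigmoid(light.view(1, 1, 3) + material.view(1, 1, 3) + pose.sum())
    return img.expand(128, 128, 3)

target = torch.rand(128, 128, 3)               # the input photograph
pose = torch.zeros(6, requires_grad=True)      # per-object rigid pose (axis-angle + translation)
light = torch.zeros(3, requires_grad=True)     # emitter intensity
material = torch.zeros(3, requires_grad=True)  # procedural material parameters

opt = torch.optim.Adam([pose, light, material], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.l1_loss(render_scene(pose, light, material), target)
    loss.backward()                            # gradients flow through the renderer
    opt.step()
```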

Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild

May 15, 2023
Dafei Qin, Jun Saito, Noam Aigerman, Thibault Groueix, Taku Komura

We propose an end-to-end deep-learning approach for automatic rigging and retargeting of 3D models of human faces in the wild. Our approach, called Neural Face Rigging (NFR), holds three key properties: (i) NFR's expression space maintains human-interpretable editing parameters for artistic controls; (ii) NFR is readily applicable to arbitrary facial meshes with different connectivity and expressions; (iii) NFR can encode and produce fine-grained details of complex expressions performed by arbitrary subjects. To the best of our knowledge, NFR is the first approach to provide realistic and controllable deformations of in-the-wild facial meshes, without the manual creation of blendshapes or correspondence. We design a deformation autoencoder and train it through a multi-dataset training scheme, which benefits from the unique advantages of two data sources: a linear 3DMM with interpretable control parameters as in FACS, and 4D captures of real faces with fine-grained details. Through various experiments, we show NFR's ability to automatically produce realistic and accurate facial deformations across a wide range of existing datasets as well as noisy facial scans in-the-wild, while providing artist-controlled, editable parameters.
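
A deliberately simplified, fixed-topology sketch of a deformation autoencoder in this spirit: an encoder compresses a facial deformation into a compact, FACS-like code and a decoder reproduces per-vertex displacements from it. The real NFR additionally handles arbitrary mesh connectivity and mixes 3DMM and 4D-scan data during training; all sizes and names below are assumptions.

```python
import torch
import torch.nn as nn

NUM_VERTS, CODE_DIM = 5023, 53  # e.g., a FLAME-sized mesh and a FACS-like code size (assumed)

class DeformationAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(NUM_VERTS * 3, 512), nn.ReLU(),
            nn.Linear(512, CODE_DIM),        # interpretable expression code
        )
        self.decoder = nn.Sequential(
            nn.Linear(CODE_DIM, 512), nn.ReLU(),
            nn.Linear(512, NUM_VERTS * 3),   # per-vertex displacement from the neutral face
        )

    def forward(self, deformed, neutral):
        code = self.encoder((deformed - neutral).flatten(1))
        offsets = self.decoder(code).view(-1, NUM_VERTS, 3)
        return neutral + offsets, code       # retargeting = decoding the code on a new neutral face

model = DeformationAutoencoder()
neutral = torch.zeros(2, NUM_VERTS, 3)
deformed = torch.randn(2, NUM_VERTS, 3) * 0.01
recon, code = model(deformed, neutral)
loss = nn.functional.mse_loss(recon, deformed)  # reconstruction term (illustrative)
```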

* SIGGRAPH 2023 (Conference Track), 13 pages, 15 figures 

TextDeformer: Geometry Manipulation using Text Guidance

Apr 26, 2023
William Gao, Noam Aigerman, Thibault Groueix, Vladimir G. Kim, Rana Hanocka

We present a technique for automatically producing a deformation of an input triangle mesh, guided solely by a text prompt. Our framework is capable of deformations that produce both large, low-frequency shape changes, and small high-frequency details. Our framework relies on differentiable rendering to connect geometry to powerful pre-trained image encoders, such as CLIP and DINO. Notably, updating mesh geometry by taking gradient steps through differentiable rendering is notoriously challenging, commonly resulting in deformed meshes with significant artifacts. These difficulties are amplified by noisy and inconsistent gradients from CLIP. To overcome this limitation, we opt to represent our mesh deformation through Jacobians, which update the deformation in a global, smooth manner (rather than through locally sub-optimal steps). Our key observation is that Jacobians are a representation that favors smoother, large deformations, leading to a global relation between vertices and pixels, and avoiding localized noisy gradients. Additionally, to ensure the resulting shape is coherent from all 3D viewpoints, we encourage the deep features computed on the 2D encoding of the rendering to be consistent for a given vertex from all viewpoints. We demonstrate that our method is capable of smoothly deforming a wide variety of source meshes according to target text prompts, achieving both large modifications to, e.g., body proportions of animals, as well as adding fine semantic details, such as shoe laces on an army boot and fine details of a face.
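
A sketch of this optimization loop under stated assumptions: the per-face Jacobians are the optimization variables, while poisson_solve, render_views, and clip_image_embed are hypothetical placeholders for the Poisson solver, the differentiable renderer, and a frozen CLIP image encoder.

```python
import torch

def poisson_solve(jacobians, mesh):
    # Placeholder: a real implementation solves a sparse Poisson system for the vertices;
    # here we only keep a differentiable path from the Jacobians to the vertices.
    return mesh["verts"] + 1e-3 * jacobians[: mesh["verts"].shape[0], :, 0]

def render_views(verts, mesh, n_views=4):
    # Placeholder differentiable renderer producing multi-view images of the deformed mesh.
    return torch.sigmoid(verts.sum()) * torch.ones(n_views, 3, 224, 224)

def clip_image_embed(images):
    # Placeholder for a frozen CLIP image encoder.
    return torch.nn.functional.normalize(images.flatten(1)[:, :512], dim=-1)

mesh = {"verts": torch.rand(1000, 3), "faces": torch.randint(0, 1000, (2000, 3))}
text_embed = torch.nn.functional.normalize(torch.randn(1, 512), dim=-1)  # CLIP text feature of the prompt

# Identity initialization: one 3x3 identity Jacobian per face, so the mesh starts unchanged.
jacobians = torch.eye(3).repeat(mesh["faces"].shape[0], 1, 1).requires_grad_(True)
opt = torch.optim.Adam([jacobians], lr=5e-3)

for step in range(100):
    opt.zero_grad()
    verts = poisson_solve(jacobians, mesh)           # global, smooth deformation from the Jacobians
    img_embed = clip_image_embed(render_views(verts, mesh))
    loss = 1.0 - (img_embed @ text_embed.T).mean()   # maximize image/text similarity
    loss.backward()
    opt.step()
```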

NOPE: Novel Object Pose Estimation from a Single Image

Mar 23, 2023
Van Nguyen Nguyen, Thibault Groueix, Yinlin Hu, Mathieu Salzmann, Vincent Lepetit

The practicality of 3D object pose estimation remains limited for many applications due to the need for prior knowledge of a 3D model and a training period for new objects. To address this limitation, we propose an approach that takes a single image of a new object as input and predicts the relative pose of this object in new images without prior knowledge of the object's 3D model and without requiring training time for new objects and categories. We achieve this by training a model to directly predict discriminative embeddings for viewpoints surrounding the object. This prediction is done using a simple U-Net architecture with attention and conditioned on the desired pose, which yields extremely fast inference. We compare our approach to state-of-the-art methods and show it outperforms them both in terms of accuracy and robustness. Our source code is publicly available at https://github.com/nv-nguyen/nope
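
A sketch of the matching step only, not the authors' code: embeddings predicted for a discrete set of viewpoints around the object are compared with the query image's embedding, and the best match is read off as the relative viewpoint. The embedding functions below are hypothetical stand-ins for the pose-conditioned U-Net, and the viewpoint sampling is an assumption.

```python
import torch
import torch.nn.functional as F

def embed_query(query_image: torch.Tensor) -> torch.Tensor:
    # Placeholder embedding of the query image.
    return F.normalize(query_image.flatten()[:256], dim=0)

def predict_viewpoint_embeddings(ref_image: torch.Tensor, viewpoints: torch.Tensor) -> torch.Tensor:
    # Placeholder: one embedding per candidate viewpoint, conditioned on the desired pose.
    return F.normalize(torch.randn(viewpoints.shape[0], 256), dim=1)

viewpoints = torch.randn(642, 4)       # e.g., quaternions sampled on a viewpoint sphere (assumed count)
ref_image = torch.rand(3, 224, 224)    # the single reference image of the new object
query_image = torch.rand(3, 224, 224)

templates = predict_viewpoint_embeddings(ref_image, viewpoints)  # (V, 256)
query = embed_query(query_image)                                  # (256,)
best = (templates @ query).argmax()
relative_pose = viewpoints[best]       # predicted relative viewpoint of the query image
```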

PoseBERT: A Generic Transformer Module for Temporal 3D Human Modeling

Aug 22, 2022
Fabien Baradel, Romain Brégier, Thibault Groueix, Philippe Weinzaepfel, Yannis Kalantidis, Grégory Rogez

Training state-of-the-art models for human pose estimation in videos requires datasets with annotations that are really hard and expensive to obtain. Although transformers have been recently utilized for body pose sequence modeling, related methods rely on pseudo-ground truth to augment the currently limited training data available for learning such models. In this paper, we introduce PoseBERT, a transformer module that is fully trained on 3D Motion Capture (MoCap) data via masked modeling. It is simple, generic and versatile, as it can be plugged on top of any image-based model to transform it into a video-based model leveraging temporal information. We showcase variants of PoseBERT with different inputs varying from 3D skeleton keypoints to rotations of a 3D parametric model for either the full body (SMPL) or just the hands (MANO). Since PoseBERT training is task agnostic, the model can be applied to several tasks such as pose refinement, future pose prediction or motion completion without finetuning. Our experimental results validate that adding PoseBERT on top of various state-of-the-art pose estimation methods consistently improves their performance, while its low computational cost allows us to use it in a real-time demo for smoothly animating a robotic hand via a webcam. Test code and models are available at https://github.com/naver/posebert.
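
A minimal sketch of masked pose modeling in this spirit (the sequence length, masking ratio, and 6D rotation parameterization are assumptions): random time steps of a MoCap pose sequence are replaced by a learned mask token, and a transformer is trained to reconstruct them.

```python
import torch
import torch.nn as nn

SEQ_LEN, POSE_DIM, D_MODEL = 64, 24 * 6, 256  # 24 SMPL joints x 6D rotation (assumed)

class PoseBERTLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(POSE_DIM, D_MODEL)
        self.mask_token = nn.Parameter(torch.zeros(D_MODEL))
        self.pos = nn.Parameter(torch.zeros(SEQ_LEN, D_MODEL))   # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.out_proj = nn.Linear(D_MODEL, POSE_DIM)

    def forward(self, poses, mask):               # poses: (B, T, POSE_DIM), mask: (B, T) bool
        x = self.in_proj(poses)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        x = x + self.pos
        return self.out_proj(self.encoder(x))     # reconstructed pose sequence

model = PoseBERTLike()
poses = torch.randn(8, SEQ_LEN, POSE_DIM)         # MoCap pose sequences
mask = torch.rand(8, SEQ_LEN) < 0.15              # BERT-style random masking
pred = model(poses, mask)
loss = nn.functional.mse_loss(pred[mask], poses[mask])  # supervise only the masked steps
```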

Leveraging Monocular Disparity Estimation for Single-View Reconstruction

Jul 01, 2022
Marissa Ramirez de Chanlatte, Matheus Gadelha, Thibault Groueix, Radomir Mech

We present a fine-tuning method to improve the appearance of 3D geometries reconstructed from single images. We leverage advances in monocular depth estimation to obtain disparity maps and present a novel approach to transforming 2D normalized disparity maps into 3D point clouds by solving an optimization on the relevant camera parameters. After creating a 3D point cloud from disparity, we introduce a method to combine the new point cloud with existing information to form a more faithful and detailed final geometry. We demonstrate the efficacy of our approach with multiple experiments on both synthetic and real images.
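
A sketch of the disparity-to-point-cloud step under an assumed parameterization: a disparity scale, a shift, and a focal length are optimized so that the back-projected points agree with a coarse reference geometry; the reference and the loss are placeholders.

```python
import torch

H, W = 64, 64
disparity = torch.rand(H, W)                       # normalized disparity from a monocular network
coarse_points = torch.rand(H * W, 3)               # coarse geometry used as a reference (placeholder)

log_scale = torch.zeros((), requires_grad=True)    # disparity-to-depth scale (log space)
shift = torch.zeros((), requires_grad=True)        # disparity shift
log_focal = torch.tensor(5.0, requires_grad=True)  # focal length in pixels (log space)

ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def backproject():
    denom = (log_scale.exp() * disparity + shift).clamp(min=1e-3)
    depth = 1.0 / denom                            # depth from normalized disparity
    f = log_focal.exp()
    x = (xs - W / 2) / f * depth                   # pinhole back-projection
    y = (ys - H / 2) / f * depth
    return torch.stack([x, y, depth], dim=-1).reshape(-1, 3)

opt = torch.optim.Adam([log_scale, shift, log_focal], lr=1e-2)
for step in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.l1_loss(backproject(), coarse_points)  # align with the coarse geometry
    loss.backward()
    opt.step()
```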

Learning Joint Surface Atlases

Jun 13, 2022
Theo Deprelle, Thibault Groueix, Noam Aigerman, Vladimir G. Kim, Mathieu Aubry

This paper describes new techniques for learning atlas-like representations of 3D surfaces, i.e. homeomorphic transformations from a 2D domain to surfaces. Compared to prior work, we propose two major contributions. First, instead of mapping a fixed 2D domain, such as a set of square patches, to the surface, we learn a continuous 2D domain with arbitrary topology by optimizing a point sampling distribution represented as a mixture of Gaussians. Second, we learn consistent mappings in both directions: charts, from the 3D surface to 2D domain, and parametrizations, their inverse. We demonstrate that this improves the quality of the learned surface representation, as well as its consistency in a collection of related shapes. It thus leads to improvements for applications such as correspondence estimation, texture transfer, and consistent UV mapping. As an additional technical contribution, we outline that, while incorporating normal consistency has clear benefits, it leads to issues in the optimization, and that these issues can be mitigated using a simple repulsive regularization. We demonstrate that our contributions provide better surface representation than existing baselines.
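
A sketch of the two learned mappings and the sampling domain described above, with illustrative sizes and losses: 2D points are sampled from a learned mixture of Gaussians, a parametrization MLP maps 2D to 3D, a chart MLP maps 3D back to 2D, and a cycle-consistency term ties the two together.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, d_out))

parametrization = mlp(2, 3)                   # 2D domain -> 3D surface point
chart = mlp(3, 2)                             # 3D surface point -> 2D domain

K = 8                                         # number of Gaussians in the learned 2D domain
means = nn.Parameter(torch.randn(K, 2) * 0.1)
log_std = nn.Parameter(torch.zeros(K, 2))

def sample_domain(n):
    idx = torch.randint(0, K, (n,))
    return means[idx] + torch.randn(n, 2) * log_std[idx].exp()

surface_points = torch.rand(1024, 3)          # points sampled on the training surface
opt = torch.optim.Adam(list(parametrization.parameters()) + list(chart.parameters())
                       + [means, log_std], lr=1e-3)

for step in range(100):
    opt.zero_grad()
    uv = sample_domain(1024)
    recon = parametrization(uv)                                 # 2D -> 3D
    cycle = nn.functional.mse_loss(chart(recon), uv)            # chart after parametrization = identity
    # One-sided Chamfer term: every surface point should be close to some reconstructed point.
    chamfer = torch.cdist(surface_points, recon).min(dim=1).values.mean()
    (chamfer + cycle).backward()
    opt.step()
```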

Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes

May 05, 2022
Noam Aigerman, Kunal Gupta, Vladimir G. Kim, Siddhartha Chaudhuri, Jun Saito, Thibault Groueix

This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network, enabling training and evaluating over heterogeneous collections of meshes that do not share a triangulation, as well as producing highly detail-preserving maps whose accuracy exceeds current state of the art. The framework is based on reducing the neural aspect to a prediction of a matrix for a single given point, conditioned on a global shape descriptor. The field of matrices is then projected onto the tangent bundle of the given mesh, and used as candidate jacobians for the predicted map. The map is computed by a standard Poisson solve, implemented as a differentiable layer with cached pre-factorization for efficient training. This construction is agnostic to the triangulation of the input, thereby enabling applications on datasets with varying triangulations. At the same time, by operating in the intrinsic gradient domain of each individual mesh, it allows the framework to predict highly-accurate mappings. We validate these properties by conducting experiments over a broad range of scenarios, from semantic ones such as morphing, registration, and deformation transfer, to optimization-based ones, such as emulating elastic deformations and contact correction, as well as being the first work, to our knowledge, to tackle the task of learning to compute UV parameterizations of arbitrary meshes. The results exhibit the high accuracy of the method as well as its versatility, as it is readily applied to the above scenarios without any changes to the framework.
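
A sketch of the final Poisson solve only (the network predicting the per-face matrices is omitted), assuming a precomputed sparse gradient operator such as libigl's igl.grad with its component-stacked row ordering; the row/column conventions and the regularization are assumptions of this sketch.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson(G: sp.spmatrix, face_areas: np.ndarray, jacobians: np.ndarray) -> np.ndarray:
    """Recover vertex positions from per-face target Jacobians.

    G: (3F x V) gradient operator, face_areas: (F,), jacobians: (F, 3, 3) candidate per-face maps.
    Returns (V, 3) vertex positions, up to a global translation.
    """
    # Stack the targets to match G's row ordering: block k holds the d/dx_k rows of each Jacobian.
    rhs = np.concatenate([jacobians[:, :, 0], jacobians[:, :, 1], jacobians[:, :, 2]], axis=0)  # (3F, 3)
    A = sp.diags(np.tile(face_areas, 3))                 # area weighting, repeated per component
    lhs = (G.T @ A @ G)                                   # stiffness (Laplacian-like) matrix, V x V
    lhs = (lhs + 1e-8 * sp.eye(lhs.shape[0])).tocsc()     # pin down the translational null space
    solve = spla.factorized(lhs)                          # cached pre-factorization, reused per column
    return np.column_stack([solve(np.asarray(G.T @ A @ rhs[:, c]).ravel()) for c in range(3)])
```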
