David Novotny

HoloFusion: Towards Photo-realistic 3D Generative Modeling

Aug 28, 2023
Animesh Karnewar, Niloy J. Mitra, Andrea Vedaldi, David Novotny

Diffusion-based image generators can now produce high-quality and diverse samples, but their success has yet to fully translate to 3D generation: existing diffusion methods can either generate low-resolution but 3D consistent outputs, or detailed 2D views of 3D objects but with potential structural defects and lacking view consistency or realism. We present HoloFusion, a method that combines the best of these approaches to produce high-fidelity, plausible, and diverse 3D samples while learning from a collection of multi-view 2D images only. The method first generates coarse 3D samples using a variant of the recently proposed HoloDiffusion generator. Then, it independently renders and upsamples a large number of views of the coarse 3D model, super-resolves them to add detail, and distills those into a single, high-fidelity implicit 3D representation, which also ensures view consistency of the final renders. The super-resolution network is trained as an integral part of HoloFusion, end-to-end, and the final distillation uses a new sampling scheme to capture the space of super-resolved signals. We compare our method against existing baselines, including DreamFusion, Get3D, EG3D, and HoloDiffusion, and achieve, to the best of our knowledge, the most realistic results on the challenging CO3Dv2 dataset.

* ICCV 2023 conference; project page at: https://holodiffusion.github.io/holofusion 
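
As a rough illustration of the coarse-then-refine pipeline described in the abstract, the sketch below "distills" a set of super-resolved views into a single implicit representation by simple colour regression. Everything here is an assumption for illustration only: rendering is reduced to a per-point colour query, and the coarse generator and super-resolution network are replaced by random tensors; this is not the released HoloFusion code.

```python
# Hypothetical distillation stage: fit one implicit field to many (already
# super-resolved) views so that the final renders are view-consistent.
import torch
import torch.nn.functional as F

class ImplicitField(torch.nn.Module):
    """Tiny MLP mapping a 3D point to RGB; stands in for the final 3D representation."""
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))

    def forward(self, points):                                    # (N, 3) -> (N, 3)
        return torch.sigmoid(self.mlp(points))

def distill(view_colors, view_points, n_iters=100):
    """Fit a single field so its colours agree with all super-resolved views."""
    field = ImplicitField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    for _ in range(n_iters):
        i = torch.randint(len(view_colors), (1,)).item()          # random view
        loss = F.mse_loss(field(view_points[i]), view_colors[i])  # photometric fit
        opt.zero_grad(); loss.backward(); opt.step()
    return field

# Toy usage: eight random stand-ins for super-resolved views and their 3D sample points.
colors = [torch.rand(1024, 3) for _ in range(8)]
points = [torch.rand(1024, 3) for _ in range(8)]
distilled = distill(colors, points)
```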

Replay: Multi-modal Multi-view Acted Videos for Casual Holography

Jul 22, 2023
Roman Shapovalov, Yanir Kleiman, Ignacio Rocco, David Novotny, Andrea Vedaldi, Changan Chen, Filippos Kokkinos, Ben Graham, Natalia Neverova

We introduce Replay, a collection of multi-view, multi-modal videos of humans interacting socially. Each scene is filmed in high production quality, from different viewpoints with several static cameras, as well as wearable action cameras, and recorded with a large array of microphones at different positions in the room. Overall, the dataset contains over 4000 minutes of footage and over 7 million timestamped high-resolution frames annotated with camera poses and partially with foreground masks. The Replay dataset has many potential applications, such as novel-view synthesis, 3D reconstruction, novel-view acoustic synthesis, human body and face analysis, and training generative models. We provide a benchmark for training and evaluating novel-view synthesis, with two scenarios of different difficulty. Finally, we evaluate several baseline state-of-the-art methods on the new benchmark.

* Accepted for ICCV 2023. Roman, Yanir, and Ignacio contributed equally 

PoseDiffusion: Solving Pose Estimation via Diffusion-aided Bundle Adjustment

Jun 28, 2023
Jianyuan Wang, Christian Rupprecht, David Novotny

Camera pose estimation is a long-standing computer vision problem that to date often relies on classical methods, such as handcrafted keypoint matching, RANSAC and bundle adjustment. In this paper, we propose to formulate the Structure from Motion (SfM) problem inside a probabilistic diffusion framework, modelling the conditional distribution of camera poses given input images. This novel view of an old problem has several advantages. (i) The nature of the diffusion framework mirrors the iterative procedure of bundle adjustment. (ii) The formulation allows a seamless integration of geometric constraints from epipolar geometry. (iii) It excels in typically difficult scenarios such as sparse views with wide baselines. (iv) The method can predict intrinsics and extrinsics for an arbitrary number of images. We demonstrate that our method PoseDiffusion significantly improves over the classic SfM pipelines and the learned approaches on two real-world datasets. Finally, we observe that our method can generalize across datasets without further training. Project page: https://posediffusion.github.io/

* 9 pages, 8 figures 
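
To make the diffusion-over-poses formulation concrete, here is a toy DDPM-style reverse sampling loop over per-frame camera pose parameters. The untrained denoiser, the 9-dimensional pose parametrization, and the omission of the paper's epipolar-geometry guidance are all simplifying assumptions, not the actual PoseDiffusion implementation.

```python
# Toy reverse diffusion over camera poses conditioned on per-image features.
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Predicts the noise from (pose params, image feature, normalized timestep).
denoiser = torch.nn.Sequential(
    torch.nn.Linear(9 + 64 + 1, 128), torch.nn.ReLU(), torch.nn.Linear(128, 9))

@torch.no_grad()
def sample_poses(image_feats):
    """Start from Gaussian noise and iteratively denoise one pose vector per frame."""
    n_frames = image_feats.shape[0]
    x = torch.randn(n_frames, 9)                   # e.g. 6D rotation + 3D translation
    for t in reversed(range(T)):
        t_emb = torch.full((n_frames, 1), t / T)
        eps = denoiser(torch.cat([x, image_feats, t_emb], dim=-1))
        # Standard DDPM posterior mean step.
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

poses = sample_poses(torch.randn(5, 64))           # 5 input frames -> 5 pose vectors
print(poses.shape)                                 # torch.Size([5, 9])
```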

HOLODIFFUSION: Training a 3D Diffusion Model using 2D Images

Mar 29, 2023
Animesh Karnewar, Andrea Vedaldi, David Novotny, Niloy Mitra

Diffusion models have emerged as the best approach for generative modeling of 2D images. Part of their success is due to the possibility of training them on millions if not billions of images with a stable learning objective. However, extending these models to 3D remains difficult for two reasons. First, finding a large quantity of 3D training data is much more complex than for 2D images. Second, while it is conceptually trivial to extend the models to operate on 3D rather than 2D grids, the associated cubic growth in memory and compute complexity makes this infeasible. We address the first challenge by introducing a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision; and the second challenge by proposing an image formation model that decouples model memory from spatial memory. We evaluate our method on real-world data, using the CO3D dataset, which has not been used to train 3D generative models before. We show that our diffusion models are scalable, train robustly, and are competitive with existing approaches to 3D generative modeling in terms of sample quality and fidelity.

* CVPR 2023 conference; project page at: https://holodiffusion.github.io/ 
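
A hedged sketch of the training signal described above: the denoiser operates on a 3D feature volume, but supervision comes from a 2D photometric loss on a rendered, posed view, so only posed images are needed. The Denoiser3D module, the toy depth-averaging "renderer", and the simplified forward-diffusion step are placeholders, not the paper's architecture.

```python
# Simplified training step: diffuse a 3D feature grid, denoise it, render a view,
# and supervise with a 2D reconstruction loss against the posed target image.
import torch
import torch.nn.functional as F

class Denoiser3D(torch.nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.net = torch.nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, volume, t):          # t is ignored in this toy denoiser
        return self.net(volume)            # (B, C, D, H, W) -> same shape

def render_view(volume, camera):
    """Toy 'renderer': average over depth and read 3 channels as RGB. A real
    pipeline would ray-march the feature grid using the given camera."""
    return volume.mean(dim=2)[:, :3]       # (B, 3, H, W)

def training_step(denoiser, volume, image, camera, opt, t=0.5):
    noise = torch.randn_like(volume)
    noisy = (1 - t) * volume + t * noise   # crude stand-in for forward diffusion
    rendered = render_view(denoiser(noisy, t), camera)
    loss = F.mse_loss(rendered, image)     # 2D-only supervision
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

denoiser = Denoiser3D()
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
volume = torch.randn(1, 8, 16, 16, 16)     # latent 3D feature grid
image = torch.rand(1, 3, 16, 16)           # posed target view (toy resolution)
print(training_step(denoiser, volume, image, camera=None, opt=opt))
```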

Real-time volumetric rendering of dynamic humans

Mar 21, 2023
Ignacio Rocco, Iurii Makarov, Filippos Kokkinos, David Novotny, Benjamin Graham, Natalia Neverova, Andrea Vedaldi

We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos with accompanying parametric body fits. Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h. These speedups are obtained by using a lightweight deformation model solely based on linear blend skinning, and an efficient factorized volumetric representation for modeling the shape and color of the person in canonical pose. Moreover, we propose a novel local ray-marching rendering scheme which, by exploiting standard GPU hardware and without any baking or conversion of the radiance field, allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality. Our experimental evaluation shows results that are superior or competitive with state-of-the-art methods while obtaining a large training speedup, using a simple model, and achieving real-time rendering.

* Project page: https://real-time-humans.github.io/ 
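
The deformation model mentioned above is plain linear blend skinning: each canonical point is moved by a weighted mix of per-bone rigid transforms. The snippet below shows only that operation on a toy two-bone rig; the shapes and the rig itself are illustrative, not the paper's body model.

```python
# Linear blend skinning of a small set of canonical points.
import torch

def linear_blend_skinning(points, weights, rotations, translations):
    """points: (P, 3), weights: (P, J), rotations: (J, 3, 3), translations: (J, 3)."""
    # Transform every point by every bone: per_bone[j, p] = R_j @ points[p] + t_j
    per_bone = torch.einsum('jab,pb->jpa', rotations, points) + translations[:, None]
    # Blend the per-bone results with the per-point skinning weights.
    return torch.einsum('pj,jpa->pa', weights, per_bone)

# Toy rig: bone 0 is the identity, bone 1 rotates 90 degrees about the z axis.
P, J = 4, 2
points = torch.rand(P, 3)
weights = torch.softmax(torch.rand(P, J), dim=-1)
rotations = torch.stack([
    torch.eye(3),
    torch.tensor([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]]),
])
translations = torch.zeros(J, 3)
posed = linear_blend_skinning(points, weights, rotations, translations)
print(posed.shape)    # torch.Size([4, 3])
```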

Self-Supervised Correspondence Estimation via Multiview Registration

Dec 06, 2022
Mohamed El Banani, Ignacio Rocco, David Novotny, Andrea Vedaldi, Natalia Neverova, Justin Johnson, Benjamin Graham

Video provides us with the spatio-temporal consistency needed for visual learning. Recent approaches have utilized this signal to learn correspondence estimation from close-by frame pairs. However, by only relying on close-by frame pairs, those approaches miss out on the richer long-range consistency between distant overlapping frames. To address this, we propose a self-supervised approach for correspondence estimation that learns from multiview consistency in short RGB-D video sequences. Our approach combines pairwise correspondence estimation and registration with a novel SE(3) transformation synchronization algorithm. Our key insight is that self-supervised multiview registration allows us to obtain correspondences over longer time frames, increasing both the diversity and difficulty of sampled pairs. We evaluate our approach on indoor scenes for correspondence estimation and RGB-D pointcloud registration and find that we perform on-par with supervised approaches.

* Accepted to WACV 2023. Project page: https://mbanani.github.io/syncmatch/ 
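
For intuition about the synchronization step, the sketch below runs a standard spectral rotation-synchronization recipe that turns pairwise relative rotations into globally consistent absolute ones. It is not the paper's SE(3) algorithm (which also handles translations); it only illustrates the underlying idea.

```python
# Spectral rotation synchronization: recover absolute rotations (up to a global
# gauge) from pairwise relative rotations R_ij ~ R_i @ R_j.T.
import numpy as np

def synchronize_rotations(pairwise, n):
    """pairwise: {(i, j): R_ij}; returns an (n, 3, 3) array of absolute rotations."""
    M = np.zeros((3 * n, 3 * n))
    for (i, j), R_ij in pairwise.items():
        M[3 * i:3 * i + 3, 3 * j:3 * j + 3] = R_ij
        M[3 * j:3 * j + 3, 3 * i:3 * i + 3] = R_ij.T
    for i in range(n):
        M[3 * i:3 * i + 3, 3 * i:3 * i + 3] = np.eye(3)
    # In the noise-free case M = R R^T for the stacked rotations R (3n x 3), so the
    # top-3 eigenvectors recover R up to a global 3x3 gauge transform.
    _, vecs = np.linalg.eigh(M)
    blocks = vecs[:, -3:].reshape(n, 3, 3)
    if np.linalg.det(blocks[0]) < 0:                 # fix a possible gauge reflection
        blocks = blocks * np.array([1.0, 1.0, -1.0])
    U, _, Vt = np.linalg.svd(blocks)                 # project onto nearest rotations
    return U @ Vt

# Toy example: three views with exact pairwise relative rotations.
rng = np.random.default_rng(0)

def random_rotation():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))

gt = [random_rotation() for _ in range(3)]
relative = {(i, j): gt[i] @ gt[j].T for i in range(3) for j in range(i + 1, 3)}
print(synchronize_rotations(relative, 3).shape)      # (3, 3, 3)
```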

Common Pets in 3D: Dynamic New-View Synthesis of Real-Life Deformable Categories

Nov 07, 2022
Samarth Sinha, Roman Shapovalov, Jeremy Reizenstein, Ignacio Rocco, Natalia Neverova, Andrea Vedaldi, David Novotny

Obtaining photorealistic reconstructions of objects from sparse views is inherently ambiguous and can only be achieved by learning suitable reconstruction priors. Earlier works on sparse rigid object reconstruction successfully learned such priors from large datasets such as CO3D. In this paper, we extend this approach to dynamic objects. We use cats and dogs as a representative example and introduce Common Pets in 3D (CoP3D), a collection of crowd-sourced videos showing around 4,200 distinct pets. CoP3D is one of the first large-scale datasets for benchmarking non-rigid 3D reconstruction "in the wild". We also propose Tracker-NeRF, a method for learning 4D reconstruction from our dataset. At test time, given a small number of video frames of an unseen object, Tracker-NeRF predicts the trajectories of its 3D points and generates new views, interpolating viewpoint and time. Results on CoP3D reveal significantly better non-rigid new-view synthesis performance than existing baselines.
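
A minimal sketch of the trajectory idea attributed to Tracker-NeRF above: a small network maps a canonical 3D point plus a time value to that point's position at that time, so the deforming geometry can be queried at interpolated times. The MLP, its sizes, and the scalar time input are assumptions for illustration only.

```python
# Hypothetical trajectory field: canonical point + time -> deformed point.
import torch

class TrajectoryField(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))

    def forward(self, points, t):                  # points: (P, 3), t in [0, 1]
        t_col = torch.full((points.shape[0], 1), float(t))
        # Predict a per-point offset and add it to the canonical position.
        return points + self.mlp(torch.cat([points, t_col], dim=-1))

field = TrajectoryField()
canonical = torch.rand(256, 3)                     # points on the canonical shape
frame_a = field(canonical, t=0.25)                 # geometry at an observed time
midway = field(canonical, t=0.50)                  # query an interpolated time
print(frame_a.shape, midway.shape)                 # torch.Size([256, 3]) twice
```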

Nerfels: Renderable Neural Codes for Improved Camera Pose Estimation

Jun 04, 2022
Gil Avraham, Julian Straub, Tianwei Shen, Tsun-Yi Yang, Hugo Germain, Chris Sweeney, Vasileios Balntas, David Novotny, Daniel DeTone, Richard Newcombe

This paper presents a framework that combines traditional keypoint-based camera pose optimization with an invertible neural rendering mechanism. Our proposed 3D scene representation, Nerfels, is locally dense yet globally sparse. As opposed to existing invertible neural rendering systems which overfit a model to the entire scene, we adopt a feature-driven approach for representing scene-agnostic, local 3D patches with renderable codes. By modelling a scene only where local features are detected, our framework effectively generalizes to unseen local regions in the scene via an optimizable code conditioning mechanism in the neural renderer, all while maintaining the low memory footprint of a sparse 3D map representation. Our model can be incorporated into existing state-of-the-art hand-crafted and learned local feature pose estimators, yielding improved performance when evaluated on ScanNet in wide-baseline camera scenarios.

* Published at CVPRW with supplementary material 
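
The "optimizable code conditioning" mentioned above can be pictured as follows: a shared neural renderer is conditioned on a per-patch latent code, and an unseen local region is handled by optimizing only that code against observed pixels. Network sizes, the 16-dimensional code, and the fitting loop are hypothetical stand-ins, not the Nerfels implementation.

```python
# Fit a per-patch latent code while keeping the shared renderer weights untouched.
import torch
import torch.nn.functional as F

renderer = torch.nn.Sequential(            # (local 3D point, patch code) -> RGB
    torch.nn.Linear(3 + 16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))

def fit_patch_code(points, observed_rgb, n_iters=200):
    code = torch.zeros(16, requires_grad=True)
    opt = torch.optim.Adam([code], lr=1e-2)        # only the code is optimized
    for _ in range(n_iters):
        inputs = torch.cat([points, code.expand(points.shape[0], -1)], dim=-1)
        loss = F.mse_loss(torch.sigmoid(renderer(inputs)), observed_rgb)
        opt.zero_grad(); loss.backward(); opt.step()
    return code.detach()

points = torch.rand(128, 3) - 0.5          # points in a local patch around a keypoint
rgb = torch.rand(128, 3)                   # colours observed at those points (toy data)
patch_code = fit_patch_code(points, rgb)
print(patch_code.shape)                    # torch.Size([16])
```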

iSDF: Real-Time Neural Signed Distance Fields for Robot Perception

Apr 05, 2022
Joseph Ortiz, Alexander Clegg, Jing Dong, Edgar Sucar, David Novotny, Michael Zollhoefer, Mustafa Mukadam

We present iSDF, a continual learning system for real-time signed distance field (SDF) reconstruction. Given a stream of posed depth images from a moving camera, it trains a randomly initialised neural network to map input 3D coordinates to approximate signed distances. The model is self-supervised by minimising a loss that bounds the predicted signed distance using the distance to the closest sampled point in a batch of query points that are actively sampled. In contrast to prior work based on voxel grids, our neural method is able to provide adaptive levels of detail with plausible filling in of partially observed regions and denoising of observations, all while having a more compact representation. In evaluations against alternative methods on real and synthetic datasets of indoor environments, we find that iSDF produces more accurate reconstructions, and better approximations of collision costs and gradients useful for downstream planners in domains from navigation to manipulation. Code and video results can be found at our project page: https://joeaortiz.github.io/iSDF/ .

* Project page: https://joeaortiz.github.io/iSDF/ 
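
The self-supervised bound described in the abstract can be sketched as follows: the predicted signed distance at a query point should not exceed the distance to the nearest surface sample in the batch, since that distance is an upper bound on the true value. The network size, the toy sampling, and the omission of the paper's other loss terms are simplifications, not the iSDF implementation.

```python
# One term of an iSDF-style self-supervised objective: penalize predictions that
# violate the nearest-sample upper bound on the signed distance.
import torch

sdf_net = torch.nn.Sequential(                     # 3D coordinate -> approximate SDF
    torch.nn.Linear(3, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))

def bound_loss(query_points, surface_points):
    pred = sdf_net(query_points).squeeze(-1)                       # (Q,)
    # Distance from each query point to its closest sampled surface point: (Q,)
    bound = torch.cdist(query_points, surface_points).min(dim=1).values
    return torch.relu(pred - bound).mean()                         # penalize violations only

queries = torch.rand(512, 3) * 2 - 1               # free-space query points (toy)
surface = torch.rand(2048, 3) * 2 - 1              # back-projected depth samples (toy)
loss = bound_loss(queries, surface)
loss.backward()                                    # gradients reach sdf_net's weights
print(float(loss))
```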

Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction

Sep 01, 2021
Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, David Novotny

Traditional approaches for learning 3D object categories have been predominantly trained and evaluated on synthetic datasets due to the unavailability of real 3D-annotated category-centric data. Our main goal is to facilitate advances in this field by collecting real-world data at a scale comparable to the existing synthetic counterparts. The principal contribution of this work is thus a large-scale dataset, called Common Objects in 3D, with real multi-view images of object categories annotated with camera poses and ground truth 3D point clouds. The dataset contains a total of 1.5 million frames from nearly 19,000 videos capturing objects from 50 MS-COCO categories and, as such, it is significantly larger than alternatives both in terms of the number of categories and objects. We exploit this new dataset to conduct one of the first large-scale "in-the-wild" evaluations of several new-view-synthesis and category-centric 3D reconstruction methods. Finally, we contribute NerFormer, a novel neural rendering method that leverages the powerful Transformer to reconstruct an object given a small number of its views. The CO3D dataset is available at https://github.com/facebookresearch/co3d .

* International Conference on Computer Vision, 2021  
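
Purely as an illustration of what the dataset provides per frame (an image, a camera pose, and a link to the sequence's ground-truth point cloud), here is a hypothetical record layout. The field names are invented for this sketch and do not reflect the official CO3D annotation format, which is documented in the linked repository.

```python
# Hypothetical per-frame record mirroring the annotations described in the abstract.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FrameRecord:
    sequence_name: str                              # one of the ~19,000 videos
    category: str                                   # one of the 50 MS-COCO categories
    image_path: str
    rotation: Tuple[Tuple[float, ...], ...]         # 3x3 camera rotation
    translation: Tuple[float, float, float]         # camera translation
    pointcloud_path: str                            # point cloud of the whole sequence

frame = FrameRecord(
    sequence_name="apple_001", category="apple",
    image_path="images/frame0001.jpg",
    rotation=((1, 0, 0), (0, 1, 0), (0, 0, 1)),
    translation=(0.0, 0.0, 1.0),
    pointcloud_path="pointclouds/apple_001.ply",
)
print(frame.category)
```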