Jamie Watson

Virtual Occlusions Through Implicit Depth

May 11, 2023
Jamie Watson, Mohamed Sayed, Zawar Qureshi, Gabriel J. Brostow, Sara Vicente, Oisin Mac Aodha, Michael Firman

For augmented reality (AR), it is important that virtual assets appear to 'sit among' real-world objects. The virtual element should variously occlude and be occluded by real matter, based on a plausible depth ordering. This occlusion should be consistent over time as the viewer's camera moves. Unfortunately, small mistakes in the estimated scene depth can ruin the downstream occlusion mask, and thereby the AR illusion. Especially in real-time settings, depths inferred near boundaries or across time can be inconsistent. In this paper, we challenge the need for depth regression as an intermediate step. We instead propose an implicit model for depth and use it to predict the occlusion mask directly. The inputs to our network are one or more color images, plus the known depths of any virtual geometry. We show that our occlusion predictions are more accurate and more temporally stable than predictions derived from traditional depth-estimation models. We obtain state-of-the-art occlusion results on the challenging ScanNetv2 dataset and superior qualitative results on real scenes.
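As a rough illustration of the two pipelines (a toy sketch, not the paper's network), the snippet below contrasts the traditional route, which thresholds an estimated depth map against the virtual asset's depth, with a small stand-in network that takes RGB plus the virtual depth and predicts the soft occlusion mask directly:

```python
import torch
import torch.nn as nn

def occlusion_from_depth(estimated_depth, virtual_depth):
    """Traditional route: the virtual pixel is shown wherever it is nearer."""
    return (virtual_depth < estimated_depth).float()

class ImplicitOcclusionNet(nn.Module):
    """Toy stand-in (not the paper's architecture): consumes RGB plus the known
    virtual-geometry depth and predicts a soft occlusion mask directly,
    without an explicit depth map in between."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, rgb, virtual_depth):
        x = torch.cat([rgb, virtual_depth], dim=1)
        # Per-pixel probability that the virtual content is visible.
        return torch.sigmoid(self.net(x))

rgb = torch.rand(1, 3, 96, 128)
virtual_depth = torch.full((1, 1, 96, 128), 2.0)  # e.g. a planar asset 2 m from the camera
mask = ImplicitOcclusionNet()(rgb, virtual_depth)
print(mask.shape)  # torch.Size([1, 1, 96, 128])
```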

* Accepted to CVPR 2023 

SimpleRecon: 3D Reconstruction Without 3D Convolutions

Aug 31, 2022
Mohamed Sayed, John Gibson, Jamie Watson, Victor Prisacariu, Michael Firman, Clément Godard

Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods has emerged that performs reconstruction directly in a final 3D volumetric feature space. While these methods have shown impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we instead go back to the traditional route, and show how focusing on high-quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully designed 2D CNN which utilizes strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume, which allows informed depth-plane scoring. Our method achieves a significant lead over the current state of the art for depth estimation, and is close to or better than it for 3D reconstruction, on ScanNet and 7-Scenes, while still allowing online, real-time, low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon
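A heavily reduced sketch of the cost-volume idea follows. It assumes source-image features have already been warped onto D fronto-parallel depth planes (the real plane-sweep warping, and the actual keyframe/geometry metadata, are richer than shown); it only illustrates dot-product matching costs, extra metadata channels, and a purely 2D scoring head:

```python
import torch
import torch.nn as nn

B, C, D, H, W = 1, 16, 32, 24, 32
ref_feats = torch.rand(B, C, H, W)
warped_src_feats = torch.rand(B, C, D, H, W)  # source features, one slice per depth plane

# Matching cost: per-plane dot product between reference and warped source features.
cost = (ref_feats.unsqueeze(2) * warped_src_feats).mean(dim=1)            # (B, D, H, W)

# A toy metadata channel: the depth value of each hypothesis plane, broadcast spatially.
plane_depths = torch.linspace(0.5, 5.0, D).view(1, D, 1, 1).expand(B, D, H, W)
volume = torch.cat([cost, plane_depths], dim=1)                           # (B, 2D, H, W)

# A purely 2D head scores the depth planes -- no 3D convolutions involved.
head = nn.Sequential(nn.Conv2d(2 * D, 64, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(64, D, 1))
depth_probs = head(volume).softmax(dim=1)
depth = (depth_probs * plane_depths).sum(dim=1, keepdim=True)             # soft argmax over planes
print(depth.shape)  # torch.Size([1, 1, 24, 32])
```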

* ECCV 2022 version with improved timings. 14 pages + 5 pages of references 

Single Image Depth Estimation using Wavelet Decomposition

Jun 03, 2021
Michaël Ramamonjisoa, Michael Firman, Jamie Watson, Vincent Lepetit, Daniyar Turmukhambetov

We present a novel method for predicting accurate depths from monocular images with high efficiency. This efficiency is achieved by exploiting wavelet decomposition, which is integrated into a fully differentiable encoder-decoder architecture. We demonstrate that we can reconstruct high-fidelity depth maps by predicting sparse wavelet coefficients. In contrast with previous works, we show that wavelet coefficients can be learned without direct supervision on the coefficients. Instead, we supervise only the final depth image, which is reconstructed through the inverse wavelet transform. We additionally show that wavelet coefficients can be learned in fully self-supervised scenarios, without access to ground-truth depth. Finally, we apply our method to different state-of-the-art monocular depth estimation models, in each case giving similar or better results than the original model while requiring less than half the multiply-adds in the decoder network. Code is available at https://github.com/nianticlabs/wavelet-monodepth
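The central training trick, supervising only the reconstructed depth so the wavelet coefficients never need direct supervision, can be sketched as below. This uses a hand-rolled single-level Haar inverse step purely for self-containment; the real model operates over multiple scales and standard wavelet tooling:

```python
import torch
import torch.nn.functional as F

def inverse_haar_step(ll, lh, hl, hh):
    """One inverse 2D Haar step: four half-resolution subbands (B, 1, h, w)
    -> one image at double the resolution (B, 1, 2h, 2w)."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    # Interleave the four reconstructed sub-grids back into a full-resolution map.
    return F.pixel_shuffle(torch.cat([a, b, c, d], dim=1), 2)

# Pretend these are decoder outputs: a coarse depth plus (sparse) detail coefficients.
coarse = torch.rand(1, 1, 48, 64, requires_grad=True)
lh = torch.zeros(1, 1, 48, 64, requires_grad=True)
hl = torch.zeros(1, 1, 48, 64, requires_grad=True)
hh = torch.zeros(1, 1, 48, 64, requires_grad=True)

depth = inverse_haar_step(coarse, lh, hl, hh)   # (1, 1, 96, 128)
target = torch.rand(1, 1, 96, 128)              # ground truth, or a photometric target
loss = (depth - target).abs().mean()            # loss on the reconstruction only
loss.backward()                                 # gradients still reach the coefficients
print(lh.grad.abs().sum() > 0)                  # tensor(True)
```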

* CVPR 2021 

The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth

Apr 29, 2021
Jamie Watson, Oisin Mac Aodha, Victor Prisacariu, Gabriel Brostow, Michael Firman

Self-supervised monocular depth estimation networks are trained to predict scene depth using nearby frames as a supervision signal during training. However, for many applications, sequence information in the form of video frames is also available at test time. The vast majority of monocular networks do not make use of this extra signal, thus ignoring valuable information that could be used to improve the predicted depth. Those that do either use computationally expensive test-time refinement techniques or off-the-shelf recurrent networks, which only indirectly make use of the geometric information that is inherently available. We propose ManyDepth, an adaptive approach to dense depth estimation that can make use of sequence information at test time, when it is available. Taking inspiration from multi-view stereo, we propose a deep, end-to-end, cost-volume-based approach that is trained using self-supervision only. We present a novel consistency loss that encourages the network to ignore the cost volume when it is deemed unreliable, e.g., in the case of moving objects, and an augmentation scheme to cope with static cameras. Our detailed experiments on both KITTI and Cityscapes show that we outperform all published self-supervised baselines, including those that use single or multiple frames at test time.
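The consistency loss can be caricatured as follows. This is a hedged approximation rather than the paper's exact formulation: wherever the cost volume's best-matching depth disagrees strongly with a single-frame teacher, the multi-frame prediction is pulled toward the teacher, so the network learns to distrust the cost volume there (e.g. on moving objects):

```python
import torch

def consistency_loss(multi_frame_depth, teacher_depth, cost_volume_depth,
                     rel_thresh=1.0):
    """All inputs are per-pixel depth maps of shape (B, 1, H, W)."""
    # Regions where the cost-volume minimum is far from the single-frame
    # teacher's depth are treated as unreliable (moving objects, low texture, ...).
    rel_err = (cost_volume_depth - teacher_depth).abs() / teacher_depth
    unreliable = (rel_err > rel_thresh).float()
    # Supervise the multi-frame network with the (detached) teacher only there.
    return (unreliable * (multi_frame_depth - teacher_depth.detach()).abs()).mean()

mf_depth = torch.rand(1, 1, 48, 64) + 0.5
teacher_depth = torch.rand(1, 1, 48, 64) + 0.5
cv_argmin_depth = torch.rand(1, 1, 48, 64) + 0.5
print(consistency_loss(mf_depth, teacher_depth, cv_argmin_depth))
```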

* CVPR 2021 

Learning Stereo from Single Images

Aug 20, 2020
Jamie Watson, Oisin Mac Aodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael Firman

Supervised deep networks are among the best methods for finding correspondences in stereo image pairs. Like all supervised approaches, these networks require ground truth data during training. However, collecting large quantities of accurate dense correspondence data is very challenging. We propose that it is unnecessary to have such a high reliance on ground truth depths or even corresponding stereo pairs. Inspired by recent progress in monocular depth estimation, we generate plausible disparity maps from single images. In turn, we use those flawed disparity maps in a carefully designed pipeline to generate stereo training pairs. Training in this manner makes it possible to convert any collection of single RGB images into stereo training data. This results in a significant reduction in human effort, with no need to collect real depths or to hand-design synthetic data. We can consequently train a stereo matching network from scratch on datasets like COCO, which were previously hard to exploit for stereo. Through extensive experiments we show that our approach outperforms stereo networks trained with standard synthetic datasets, when evaluated on KITTI, ETH3D, and Middlebury.
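The data-generation step can be sketched as a simple forward warp: shift each pixel of a single image by its predicted disparity to synthesize a right view. The snippet below is a naive illustration only; the actual pipeline resolves pixel collisions, fills occlusion holes and sharpens flawed disparities far more carefully:

```python
import torch

def synthesize_right_view(left, disparity):
    """left: (3, H, W) image; disparity: (H, W) in pixels (positive shifts pixels left)."""
    _, H, W = left.shape
    right = torch.zeros_like(left)
    xs = torch.arange(W).expand(H, W)
    ys = torch.arange(H).unsqueeze(1).expand(H, W)
    new_xs = (xs - disparity.round().long()).clamp(0, W - 1)
    # Naive scatter: colliding writes are not depth-ordered and holes are left
    # black; a real pipeline resolves collisions and inpaints missing regions.
    right[:, ys.reshape(-1), new_xs.reshape(-1)] = left[:, ys.reshape(-1), xs.reshape(-1)]
    return right

left = torch.rand(3, 120, 160)
disp = torch.full((120, 160), 8.0)   # toy constant disparity from a mono depth network
right = synthesize_right_view(left, disp)
print(right.shape)  # torch.Size([3, 120, 160])
```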

* Accepted as an oral presentation at ECCV 2020 

Footprints and Free Space from a Single Color Image

Apr 14, 2020
Jamie Watson, Michael Firman, Aron Monszpart, Gabriel J. Brostow

Understanding the shape of a scene from a single color image is a formidable computer vision task. However, most methods aim to predict the geometry of surfaces that are visible to the camera, which is of limited use when planning paths for robots or augmented reality agents. Such agents can only move when grounded on a traversable surface, which we define as the set of classes that humans can also walk over, such as grass, footpaths and pavement. Models that predict beyond the line of sight often parameterize the scene with voxels or meshes, which can be expensive to use in machine learning frameworks. We introduce a model to predict the geometry of both visible and occluded traversable surfaces, given a single RGB image as input. We learn from stereo video sequences, using camera poses, per-frame depth and semantic segmentation to form training data, which is used to supervise an image-to-image network. We train models from the KITTI driving dataset, the indoor Matterport dataset, and from our own casually captured stereo footage. We find that a surprisingly low bar for spatial coverage of training scenes is required. We validate our algorithm against a range of strong baselines, and include an assessment of our predictions for a path-planning task.
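As a structural illustration only (nothing here matches the paper's architecture or training losses), the output format described above, per-pixel masks for visible and occluded traversable ground from one RGB image, maps naturally onto a multi-head image-to-image network:

```python
import torch
import torch.nn as nn

class FootprintNetSketch(nn.Module):
    """Toy encoder-decoder with two heads: traversable ground visible to the
    camera, and traversable ground hidden behind foreground objects."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.visible_head = nn.Conv2d(16, 1, 1)
        self.hidden_head = nn.Conv2d(16, 1, 1)

    def forward(self, rgb):
        feats = self.decoder(self.encoder(rgb))
        return torch.sigmoid(self.visible_head(feats)), torch.sigmoid(self.hidden_head(feats))

visible, hidden = FootprintNetSketch()(torch.rand(1, 3, 128, 192))
print(visible.shape, hidden.shape)  # both torch.Size([1, 1, 128, 192])
```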

* Accepted to CVPR 2020 as an oral presentation 

Self-Supervised Monocular Depth Hints

Sep 19, 2019
Jamie Watson, Michael Firman, Gabriel J. Brostow, Daniyar Turmukhambetov

Monocular depth estimators can be trained with various forms of self-supervision from binocular-stereo data to circumvent the need for high-quality laser scans or other ground-truth data. The disadvantage, however, is that the photometric reprojection losses used with self-supervised learning typically have multiple local minima. These plausible-looking alternatives to ground truth can restrict what a regression network learns, causing it to predict depth maps of limited quality. As one prominent example, depth discontinuities around thin structures are often incorrectly estimated by current state-of-the-art methods. Here, we study the problem of ambiguous reprojections in depth prediction from stereo-based self-supervision, and introduce Depth Hints to alleviate their effects. Depth Hints are complementary depth suggestions obtained from simple off-the-shelf stereo algorithms. These hints enhance an existing photometric loss function, and are used to guide a network to learn better weights. They require no additional data, and are assumed to be right only sometimes. We show that using our Depth Hints gives a substantial boost when training several leading self-supervised-from-stereo models, not just our own. Further, combined with other good practices, we produce state-of-the-art depth predictions on the KITTI benchmark.
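The "use the hint only when it helps" idea can be written as a small loss term. The sketch below is an approximation with placeholder names, not the released code: the per-pixel reprojection loss is computed once with the predicted depth and once with the hint depth, and a regression term toward the hint is applied only where the hint reprojects better:

```python
import torch

def depth_hint_loss(photo_loss_pred, photo_loss_hint, depth_pred, depth_hint):
    """photo_loss_*: per-pixel reprojection losses (B, 1, H, W) obtained by
    warping with the predicted depth and with the hint depth respectively."""
    # The hint is trusted only where it explains the other view better than
    # the network's own prediction does.
    hint_is_better = (photo_loss_hint < photo_loss_pred).float()
    # L1 regression in log-depth toward the hint, masked to those pixels.
    supervised = (depth_pred.clamp(min=1e-3).log()
                  - depth_hint.clamp(min=1e-3).log()).abs()
    return photo_loss_pred.mean() + (hint_is_better * supervised).mean()

shape = (1, 1, 48, 64)
loss = depth_hint_loss(torch.rand(shape), torch.rand(shape),
                       torch.rand(shape) + 0.5, torch.rand(shape) + 0.5)
print(loss)
```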

* Accepted to ICCV 2019 