Georgia Gkioxari

Objaverse-XL: A Universe of 10M+ 3D Objects

Jul 11, 2023
Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, Eli VanderBilt, Aniruddha Kembhavi, Carl Vondrick, Georgia Gkioxari, Kiana Ehsani, Ludwig Schmidt, Ali Farhadi

Natural language processing and 2D vision models have attained remarkable proficiency on many tasks primarily by escalating the scale of training data. However, 3D vision tasks have not seen the same progress, in part due to the challenges of acquiring high-quality 3D data. In this work, we present Objaverse-XL, a dataset of over 10 million 3D objects. Our dataset comprises deduplicated 3D objects from a diverse set of sources, including manually designed objects, photogrammetry scans of landmarks and everyday items, and professional scans of historic and antique artifacts. Representing the largest scale and diversity in the realm of 3D datasets, Objaverse-XL enables significant new possibilities for 3D vision. Our experiments demonstrate the improvements enabled by the scale of Objaverse-XL. We show that by training Zero123 on novel view synthesis, utilizing over 100 million multi-view rendered images, we achieve strong zero-shot generalization abilities. We hope that releasing Objaverse-XL will enable further innovations in the field of 3D vision at scale.

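For context on what the "multi-view rendered images" involve, below is a minimal sketch of sampling look-at cameras on a sphere around an object, a common setup for rendering such training views. The pose parameterization and numbers are illustrative assumptions, not the paper's rendering pipeline.

```python
import numpy as np

def look_at_extrinsics(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation R and translation t for a camera at `eye` looking at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)
    R = np.stack([right, down, forward])   # rows: camera x, y, z axes in world coordinates
    t = -R @ eye
    return R, t

def sample_sphere_cameras(n_views=12, radius=2.0, elevation_deg=30.0):
    """Evenly spaced azimuths at a fixed elevation -- a common multi-view rendering setup."""
    elev = np.deg2rad(elevation_deg)
    poses = []
    for azim in np.linspace(0.0, 2 * np.pi, n_views, endpoint=False):
        eye = radius * np.array([np.cos(azim) * np.cos(elev),
                                 np.sin(azim) * np.cos(elev),
                                 np.sin(elev)])
        poses.append(look_at_extrinsics(eye))
    return poses

poses = sample_sphere_cameras()
print(len(poses), poses[0][0].shape)   # 12 cameras, each with a 3x3 rotation
```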

Multiview Compressive Coding for 3D Reconstruction

Jan 19, 2023
Chao-Yuan Wu, Justin Johnson, Jitendra Malik, Christoph Feichtenhofer, Georgia Gkioxari

A central goal of visual recognition is to understand objects and scenes from a single image. 2D recognition has witnessed tremendous progress thanks to large-scale learning and general-purpose representations. Comparatively, 3D poses new challenges stemming from occlusions not depicted in the image. Prior works try to overcome these by inferring from multiple views or by relying on scarce CAD models and category-specific priors, which hinder scaling to novel settings. In this work, we explore single-view 3D reconstruction by learning generalizable representations inspired by advances in self-supervised learning. We introduce a simple framework that operates on 3D points of single objects or whole scenes coupled with category-agnostic large-scale training from diverse RGB-D videos. Our model, Multiview Compressive Coding (MCC), learns to compress the input appearance and geometry to predict the 3D structure by querying a 3D-aware decoder. MCC's generality and efficiency allow it to learn from large-scale and diverse data sources with strong generalization to novel objects imagined by DALL$\cdot$E 2 or captured in-the-wild with an iPhone.

* Project page: https://mcc3d.github.io/ 
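
A minimal sketch of the "query a 3D-aware decoder" idea described above: encoded appearance/geometry tokens are cross-attended by arbitrary 3D query points to predict occupancy and color. Module names, sizes, and the single attention layer are illustrative assumptions, not MCC's actual architecture.

```python
import torch
import torch.nn as nn

class QueryDecoder3D(nn.Module):
    """Toy stand-in for an MCC-style decoder: 3D query points attend to encoded
    appearance/geometry tokens and predict occupancy plus RGB. Sizes are illustrative."""
    def __init__(self, dim=256, n_heads=8):
        super().__init__()
        self.point_embed = nn.Linear(3, dim)             # embed (x, y, z) queries
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 1 + 3))

    def forward(self, queries, tokens):
        # queries: (B, Q, 3) 3D points; tokens: (B, N, dim) from an RGB-D encoder
        q = self.point_embed(queries)
        attended, _ = self.cross_attn(q, tokens, tokens)
        out = self.head(attended)                         # (B, Q, 4)
        occupancy_logit, rgb = out[..., :1], out[..., 1:]
        return occupancy_logit, rgb

decoder = QueryDecoder3D()
tokens = torch.randn(2, 196, 256)        # e.g. patch tokens from a ViT-style RGB-D encoder
queries = torch.rand(2, 1024, 3) * 2 - 1
occ, rgb = decoder(queries, tokens)
print(occ.shape, rgb.shape)              # (2, 1024, 1) (2, 1024, 3)
```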

BKinD-3D: Self-Supervised 3D Keypoint Discovery from Multi-View Videos

Dec 14, 2022
Jennifer J. Sun, Pierre Karashchuk, Amil Dravid, Serim Ryou, Sonia Fereidooni, John Tuthill, Aggelos Katsaggelos, Bingni W. Brunton, Georgia Gkioxari, Ann Kennedy, Yisong Yue, Pietro Perona

Quantifying motion in 3D is important for studying the behavior of humans and other animals, but manual pose annotations are expensive and time-consuming to obtain. Self-supervised keypoint discovery is a promising strategy for estimating 3D poses without annotations. However, current keypoint discovery approaches commonly process single 2D views and do not operate in the 3D space. We propose a new method to perform self-supervised keypoint discovery in 3D from multi-view videos of behaving agents, without any keypoint or bounding box supervision in 2D or 3D. Our method uses an encoder-decoder architecture with a 3D volumetric heatmap, trained to reconstruct spatiotemporal differences across multiple views, in addition to joint length constraints on a learned 3D skeleton of the subject. In this way, we discover keypoints without requiring manual supervision in videos of humans and rats, demonstrating the potential of 3D keypoint discovery for studying behavior.

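A hedged sketch of two ingredients mentioned in the abstract: reading 3D keypoints out of a volumetric heatmap with a soft-argmax, and a bone-length consistency term over a learned skeleton. The exact losses and shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def soft_argmax_3d(heatmap, grid_min=-1.0, grid_max=1.0):
    """heatmap: (B, K, D, H, W) volumetric scores -> (B, K, 3) expected keypoint coordinates."""
    B, K, D, H, W = heatmap.shape
    probs = F.softmax(heatmap.reshape(B, K, -1), dim=-1).reshape(B, K, D, H, W)
    axes = [torch.linspace(grid_min, grid_max, n, device=heatmap.device) for n in (D, H, W)]
    zz, yy, xx = torch.meshgrid(*axes, indexing="ij")
    grid = torch.stack([xx, yy, zz], dim=-1)                    # (D, H, W, 3)
    return (probs.unsqueeze(-1) * grid).sum(dim=(2, 3, 4))      # (B, K, 3)

def bone_length_consistency(keypoints, edges):
    """Penalize variation of each bone's length across frames.
    keypoints: (T, K, 3); edges: list of (parent, child) index pairs on the learned skeleton."""
    lengths = torch.stack([(keypoints[:, a] - keypoints[:, b]).norm(dim=-1) for a, b in edges], dim=-1)
    return lengths.var(dim=0).mean()

heat = torch.randn(4, 10, 16, 16, 16)          # 4 frames, 10 keypoints, 16^3 volume
kpts = soft_argmax_3d(heat)
loss = bone_length_consistency(kpts, edges=[(0, 1), (1, 2), (2, 3)])
print(kpts.shape, loss.item())
```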

Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild

Jul 21, 2022
Garrick Brazil, Julian Straub, Nikhila Ravi, Justin Johnson, Georgia Gkioxari

Recognizing scenes and objects in 3D from a single image is a longstanding goal of computer vision with applications in robotics and AR/VR. For 2D recognition, large datasets and scalable solutions have led to unprecedented advances. In 3D, existing benchmarks are small in size and approaches specialize in a few object categories and specific domains, e.g. urban driving scenes. Motivated by the success of 2D recognition, we revisit the task of 3D object detection by introducing a large benchmark, called Omni3D. Omni3D re-purposes and combines existing datasets, resulting in 234k images annotated with more than 3 million instances and 97 categories. 3D detection at such a scale is challenging due to variations in camera intrinsics and the rich diversity of scene and object types. We propose a model, called Cube R-CNN, designed to generalize across camera and scene types with a unified approach. We show that Cube R-CNN outperforms prior works on the larger Omni3D and existing benchmarks. Finally, we show that Omni3D is a powerful dataset for 3D object recognition: it improves single-dataset performance and can accelerate learning on new, smaller datasets via pre-training.

* Project website: https://garrickbrazil.com/omni3d 
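
To illustrate the geometry a unified 3D detector must handle across datasets with different cameras, here is a minimal sketch that projects a 3D cuboid (center, dimensions, yaw) into pixels using per-image intrinsics K. This is generic pinhole geometry for illustration, not Cube R-CNN's prediction head.

```python
import numpy as np

def cuboid_corners(center, dims, yaw):
    """Eight corners of a 3D box in camera coordinates.
    center: (3,) in meters; dims: (w, h, l) mapped to the x/y/z axes; yaw: rotation about camera y."""
    w, h, l = dims
    x = np.array([ 1,  1,  1,  1, -1, -1, -1, -1]) * w / 2
    y = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * h / 2
    z = np.array([ 1, -1,  1, -1,  1, -1,  1, -1]) * l / 2
    corners = np.stack([x, y, z])                               # (3, 8)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])            # yaw about the y-axis
    return R @ corners + np.asarray(center).reshape(3, 1)

def project(points_3d, K):
    """Pinhole projection of (3, N) camera-space points with intrinsics K -> (2, N) pixels."""
    uvw = K @ points_3d
    return uvw[:2] / uvw[2:]

K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
corners = cuboid_corners(center=[0.5, 0.2, 6.0], dims=(1.8, 1.5, 4.2), yaw=0.3)
print(project(corners, K).T.round(1))   # 8 projected corners in pixels
```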

Learning 3D Object Shape and Layout without 3D Supervision

Jun 14, 2022
Georgia Gkioxari, Nikhila Ravi, Justin Johnson

A 3D scene consists of a set of objects, each with a shape and a layout giving their position in space. Understanding 3D scenes from 2D images is an important goal, with applications in robotics and graphics. While there have been recent advances in predicting 3D shape and layout from a single image, most approaches rely on 3D ground truth for training which is expensive to collect at scale. We overcome these limitations and propose a method that learns to predict 3D shape and layout for objects without any ground truth shape or layout information: instead we rely on multi-view images with 2D supervision which can more easily be collected at scale. Through extensive experiments on 3D Warehouse, Hypersim, and ScanNet we demonstrate that our approach scales to large datasets of realistic images, and compares favorably to methods relying on 3D ground truth. On Hypersim and ScanNet where reliable 3D ground truth is not available, our approach outperforms supervised approaches trained on smaller and less diverse datasets.

* CVPR 2022, project page: https://gkioxari.github.io/usl/ 
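
A minimal sketch of the kind of 2D supervision described above: a soft-IoU loss comparing differentiably rendered silhouettes of the predicted shapes against 2D object masks over multiple views. The renderer is assumed to exist upstream; only the loss is sketched here, and it is not the paper's exact objective.

```python
import torch

def multiview_silhouette_loss(pred_silhouettes, gt_masks, eps=1e-6):
    """Soft-IoU loss between rendered silhouettes and 2D masks.
    pred_silhouettes: (V, H, W) in [0, 1] from a differentiable renderer, one per view.
    gt_masks:         (V, H, W) binary object masks for the same views."""
    inter = (pred_silhouettes * gt_masks).sum(dim=(1, 2))
    union = (pred_silhouettes + gt_masks - pred_silhouettes * gt_masks).sum(dim=(1, 2))
    iou = inter / (union + eps)
    return (1.0 - iou).mean()

# Dummy check with random "renders" and masks for 4 views.
pred = torch.rand(4, 128, 128, requires_grad=True)
gt = (torch.rand(4, 128, 128) > 0.5).float()
loss = multiview_silhouette_loss(pred, gt)
loss.backward()
print(loss.item(), pred.grad.shape)
```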

Recognizing Scenes from Novel Viewpoints

Dec 02, 2021
Shengyi Qian, Alexander Kirillov, Nikhila Ravi, Devendra Singh Chaplot, Justin Johnson, David F. Fouhey, Georgia Gkioxari

Humans can perceive scenes in 3D from a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to efficiently interact with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model which takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories. All this without access to the RGB images from those views. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture semantics and geometry of novel scenes with diverse layouts, object types and shapes.

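A hedged sketch of pairing an implicit 3D representation with semantic recognition: an MLP maps 3D points to a density and per-class logits, which are alpha-composited along rays to produce per-ray semantic predictions for a novel view. Sizes and architecture are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class SemanticField(nn.Module):
    """Maps a 3D point to a density and per-class logits (illustrative sizes)."""
    def __init__(self, n_classes=20, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim), nn.ReLU())
        self.density = nn.Linear(dim, 1)
        self.logits = nn.Linear(dim, n_classes)

    def forward(self, points):                       # points: (..., 3)
        h = self.mlp(points)
        return torch.relu(self.density(h)), self.logits(h)

def render_semantics(field, ray_points, deltas):
    """Alpha-composite per-point logits along each ray.
    ray_points: (R, S, 3) samples per ray; deltas: (R, S) distances between samples."""
    sigma, logits = field(ray_points)                         # (R, S, 1), (R, S, C)
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)      # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1 - alpha + 1e-10], dim=1), dim=1
    )[:, :-1]
    weights = alpha * trans                                   # (R, S)
    return (weights.unsqueeze(-1) * logits).sum(dim=1)        # (R, C) semantic logits per ray

field = SemanticField()
pts = torch.rand(1024, 32, 3)                  # 1024 rays, 32 samples each
deltas = torch.full((1024, 32), 0.05)
print(render_semantics(field, pts, deltas).shape)   # (1024, 20)
```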

Differentiable Stereopsis: Meshes from multiple views using differentiable rendering

Oct 11, 2021
Shubham Goel, Georgia Gkioxari, Jitendra Malik

We propose Differentiable Stereopsis, a multi-view stereo approach that reconstructs shape and texture from few input views and noisy cameras. We pair traditional stereopsis and modern differentiable rendering to build an end-to-end model which predicts textured 3D meshes of objects with varying topologies and shape. We frame stereopsis as an optimization problem and simultaneously update shape and cameras via simple gradient descent. We run an extensive quantitative analysis and compare to traditional multi-view stereo techniques and state-of-the-art learning based methods. We show compelling reconstructions on challenging real-world scenes and for an abundance of object types with complex shape, topology and texture. Project webpage: https://shubham-goel.github.io/ds/

* https://shubham-goel.github.io/ds/ 
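
A toy sketch of the optimization framing above, where shape parameters (vertex offsets) and a camera translation are updated jointly by gradient descent through a differentiable silhouette "renderer". The Gaussian-splat renderer is a deliberately simple stand-in for a proper differentiable mesh renderer, not the paper's method.

```python
import torch
import torch.nn.functional as F

def splat_silhouette(verts_cam, K, H=64, W=64, sigma=1.5):
    """Toy differentiable 'renderer': project vertices and splat Gaussians into a soft mask."""
    uvw = verts_cam @ K.T                                     # (V, 3)
    uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                      # (H, W, 2)
    d2 = ((grid[None] - uv[:, None, None, :]) ** 2).sum(-1)   # (V, H, W)
    return 1 - torch.prod(1 - torch.exp(-d2 / (2 * sigma ** 2)), dim=0)

K = torch.tensor([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
verts = torch.randn(200, 3) * 0.2 + torch.tensor([0.0, 0.0, 3.0])   # initial noisy shape
offsets = torch.zeros_like(verts, requires_grad=True)               # shape parameters
cam_t = torch.zeros(3, requires_grad=True)                          # camera translation to refine
target = splat_silhouette(verts + torch.tensor([0.1, -0.05, 0.0]), K).detach()

opt = torch.optim.Adam([offsets, cam_t], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    # shape and camera are both updated; the translation is applied in the camera frame (toy)
    silhouette = splat_silhouette(verts + offsets + cam_t, K)
    loss = F.mse_loss(silhouette, target)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```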

Compressed Object Detection

Feb 04, 2021
Gedeon Muhawenayo, Georgia Gkioxari

Deep learning approaches have achieved unprecedented performance in visual recognition tasks such as object detection and pose estimation. However, state-of-the-art models have millions of parameters represented as floats, which makes them computationally expensive and constrains their deployment on hardware such as mobile phones and IoT nodes. Moreover, the activations of deep neural networks tend to be sparse, suggesting that models are over-parameterized and contain redundant neurons. Model compression techniques, such as pruning and quantization, have recently shown promising results by reducing model complexity with little loss in performance. In this work, we extend pruning, a compression technique that discards unnecessary model connections, together with weight-sharing techniques, to the task of object detection. With our approach, we are able to compress a state-of-the-art object detection model by 30.0% without a loss in performance. We also show that our compressed model can be easily initialized with existing pre-trained weights, and thus is able to fully utilize published state-of-the-art model zoos.

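A small example of magnitude-based pruning using PyTorch's built-in torch.nn.utils.prune, applied to the convolutional layers of a detection model at the 30% ratio mentioned above. The torchvision detector is a stand-in assumption; the paper's exact model and weight-sharing scheme are not reproduced here.

```python
import torch
import torch.nn.utils.prune as prune
import torchvision

# Assumption: a generic torchvision detector stands in for the paper's model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)

# Apply 30% global magnitude (L1) pruning across all conv weights.
conv_params = [(m, "weight") for m in model.modules() if isinstance(m, torch.nn.Conv2d)]
prune.global_unstructured(conv_params, pruning_method=prune.L1Unstructured, amount=0.30)

# Report resulting sparsity and make the pruning permanent (folds the masks into the weights).
zeros = sum(int((m.weight == 0).sum()) for m, _ in conv_params)
total = sum(m.weight.numel() for m, _ in conv_params)
print(f"conv weight sparsity: {zeros / total:.1%}")
for m, name in conv_params:
    prune.remove(m, name)
```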

Accelerating 3D Deep Learning with PyTorch3D

Jul 16, 2020
Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, Georgia Gkioxari

Deep learning has significantly improved 2D image recognition. Extending into 3D may advance many new applications, including autonomous vehicles, virtual and augmented reality, authoring 3D content, and even improving 2D recognition. However, despite growing interest, 3D deep learning remains relatively underexplored. We believe that some of this disparity is due to the engineering challenges involved in 3D deep learning, such as efficiently processing heterogeneous data and reframing graphics operations to be differentiable. We address these challenges by introducing PyTorch3D, a library of modular, efficient, and differentiable operators for 3D deep learning. It includes a fast, modular differentiable renderer for meshes and point clouds, enabling analysis-by-synthesis approaches. Compared with other differentiable renderers, PyTorch3D is more modular and efficient, allowing users to more easily extend it while also gracefully scaling to large meshes and images. We compare the PyTorch3D operators and renderer with other implementations and demonstrate significant speed and memory improvements. We also use PyTorch3D to improve the state-of-the-art for unsupervised 3D mesh and point cloud prediction from 2D images on ShapeNet. PyTorch3D is open-source and we hope it will help accelerate research in 3D deep learning.

* tech report 
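
A short example of the heterogeneous batching and differentiable operators mentioned above, following PyTorch3D's documented API: build a Meshes batch from two meshes of different sizes, sample surface points, and compute a chamfer distance. Exact function locations may vary across PyTorch3D versions.

```python
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import chamfer_distance

# Two meshes with different vertex/face counts batched together (heterogeneous batching).
sphere1 = ico_sphere(level=2)
sphere2 = ico_sphere(level=3)
meshes = Meshes(
    verts=[sphere1.verts_packed(), sphere2.verts_packed()],
    faces=[sphere1.faces_packed(), sphere2.faces_packed()],
)

# Differentiable surface sampling and a chamfer loss between the two shapes.
points = sample_points_from_meshes(meshes, num_samples=2000)   # (2, 2000, 3)
loss, _ = chamfer_distance(points[:1], points[1:])
print(points.shape, loss.item())
```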

3D Shape Reconstruction from Vision and Touch

Jul 07, 2020
Edward J. Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal

When a toddler is presented with a new toy, their instinctive behaviour is to pick it up and inspect it with their hand and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. Here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to fusing vision and touch, which leverages advances in graph convolutional networks. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) the reconstruction quality increases with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.

* Submitted for review 
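
A toy sketch of graph-convolution-style fusion in the spirit of the chart-based approach above: per-vertex (touch) features are concatenated with a global vision feature and averaged over mesh neighbors. The layer and shapes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChartFusionGCN(nn.Module):
    """Toy graph-convolution step: per-vertex chart features (e.g. from touch) are fused
    with a global vision feature and mean-aggregated over mesh neighbors. Illustrative only."""
    def __init__(self, vert_dim=64, vision_dim=128, out_dim=64):
        super().__init__()
        self.fuse = nn.Linear(vert_dim + vision_dim, out_dim)

    def forward(self, vert_feats, vision_feat, adjacency):
        # vert_feats: (V, vert_dim); vision_feat: (vision_dim,); adjacency: (V, V) with self-loops
        fused = self.fuse(torch.cat(
            [vert_feats, vision_feat.expand(vert_feats.shape[0], -1)], dim=-1))
        deg = adjacency.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(adjacency @ fused / deg)        # mean aggregation over neighbors

V = 40
adj = (torch.rand(V, V) < 0.1).float()
adj = ((adj + adj.T + torch.eye(V)) > 0).float()          # symmetric, with self-loops
out = ChartFusionGCN()(torch.randn(V, 64), torch.randn(128), adj)
print(out.shape)   # (40, 64)
```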