Guofan Fan

Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast

Jun 01, 2023
Guofan Fan, Zekun Qi, Wenkai Shi, Kaisheng Ma

Geometry and color information provided by point clouds are both crucial for 3D scene understanding. The two kinds of information characterize different aspects of point clouds, but existing methods lack an elaborate design for their discrimination and relevance. Hence we explore a 3D self-supervised paradigm that better exploits the relations between these two sources of point cloud information. Specifically, we propose a universal 3D scene pre-training framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and color information using a Siamese network. To accommodate practical downstream tasks, we design (i) hierarchical supervision with point-level contrast and reconstruction, plus object-level contrast based on a novel deep clustering module, to close the gap between pre-training and downstream tasks; and (ii) an architecture-agnostic backbone that adapts to various downstream models. Benefiting from object-level representations associated with downstream tasks, Point-GCC can evaluate model performance directly, and the results demonstrate the effectiveness of our method. Transfer learning on a wide range of tasks also shows consistent improvements across all datasets, e.g., new state-of-the-art object detection results on the SUN RGB-D and S3DIS datasets. Code will be released at https://github.com/Asterisci/Point-GCC.
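
To make the geometry-color contrast idea concrete, below is a minimal sketch of a point-level contrastive objective between a geometry branch and a color branch of a Siamese-style network. The encoders, feature sizes, and loss weighting here are illustrative assumptions, not the authors' released implementation (see https://github.com/Asterisci/Point-GCC for the official code).

```python
# Minimal sketch: point-level geometry-color contrast (assumed design, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointBranch(nn.Module):
    """Tiny per-point MLP standing in for one arm of the Siamese network."""
    def __init__(self, in_dim, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):                      # x: (N, in_dim)
        return F.normalize(self.mlp(x), dim=-1)

def point_level_contrast(geo_feat, col_feat, temperature=0.07):
    """Symmetric InfoNCE over points: the i-th geometry feature should match
    the i-th color feature of the same point and repel all other points."""
    logits = geo_feat @ col_feat.t() / temperature          # (N, N) similarities
    targets = torch.arange(geo_feat.size(0), device=geo_feat.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: xyz coordinates feed the geometry branch, RGB values the color branch.
points_xyz = torch.randn(1024, 3)
points_rgb = torch.rand(1024, 3)
geo_branch, col_branch = PointBranch(3), PointBranch(3)
loss = point_level_contrast(geo_branch(points_xyz), col_branch(points_rgb))
```

The object-level contrast and reconstruction terms described in the abstract would be added on top of this point-level loss; they are omitted here for brevity.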


Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining

Feb 05, 2023
Zekun Qi, Runpei Dong, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma, Li Yi

Mainstream 3D representation learning approaches are built upon contrastive or generative modeling pretext tasks, which have achieved great improvements on various downstream tasks. However, by investigating methods from both paradigms, we find that (i) contrastive models are data-hungry and suffer from a representation over-fitting issue; and (ii) generative models have a data-filling issue that yields inferior data scaling capacity compared to contrastive models. This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between them. In this paper, we propose Contrast with Reconstruct (ReCon), which unifies the two paradigms. ReCon is trained to learn from both generative modeling teachers and cross-modal contrastive teachers through ensemble distillation, where the generative student guides the contrastive student. An encoder-decoder style ReCon-block is proposed that transfers knowledge through cross attention with stop-gradient, which avoids the pretraining over-fitting and pattern-difference issues. ReCon achieves a new state-of-the-art in 3D representation learning, e.g., 91.26% accuracy on ScanObjectNN. Code will be released at https://github.com/qizekun/ReCon.
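
The following is a hedged sketch of the stop-gradient cross-attention idea described for the ReCon-block: contrastive query tokens attend over the generative student's tokens, which are detached so that contrastive gradients do not flow back into the generative branch. Module choices, shapes, and names are assumptions for illustration, not the released code (https://github.com/qizekun/ReCon).

```python
# Illustrative sketch of cross attention with stop-gradient (assumed design).
import torch
import torch.nn as nn

class ReConBlockSketch(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, contrastive_queries, generative_tokens):
        # Stop-gradient: knowledge flows from the generative student to the
        # contrastive queries, but not the other way around.
        kv = self.norm_kv(generative_tokens.detach())
        q = self.norm_q(contrastive_queries)
        out, _ = self.cross_attn(q, kv, kv)
        return contrastive_queries + out       # residual update of the queries

# Toy usage: 2 global query tokens distill from 64 patch tokens of the generative student.
queries = torch.randn(4, 2, 256)       # (batch, num_queries, dim)
gen_tokens = torch.randn(4, 64, 256)   # (batch, num_patches, dim)
updated = ReConBlockSketch()(queries, gen_tokens)
```

The detach call is the key design choice: it lets the contrastive student distill from the generative representation while leaving generative pretraining undisturbed.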

* Tech report 

Language-Assisted 3D Feature Learning for Semantic Scene Understanding

Dec 11, 2022
Junbo Zhang, Guofan Fan, Guanghan Wang, Zhengyuan Su, Kaisheng Ma, Li Yi

Learning descriptive 3D features is crucial for understanding 3D scenes with diverse objects and complex structures. However, it is usually unclear whether important geometric attributes and scene context receive enough emphasis in an end-to-end trained 3D scene understanding network. To guide 3D feature learning toward these cues, we explore the help of textual scene descriptions. Given free-form descriptions paired with 3D scenes, we extract knowledge about object relationships and object attributes, and inject it into 3D feature learning through three classification-based auxiliary tasks. This language-assisted training can be combined with modern object detection and instance segmentation methods to promote 3D semantic scene understanding, especially in a label-deficient regime. Moreover, the 3D features learned with language assistance are better aligned with language features, which benefits various 3D-language multimodal tasks. Experiments on several benchmarks of 3D-only and 3D-language tasks demonstrate the effectiveness of our language-assisted 3D feature learning. Code is available at https://github.com/Asterisci/Language-Assisted-3D.
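
As a rough illustration of a classification-based auxiliary task of the kind described, the sketch below asks per-object features from a 3D backbone to predict attribute labels mined from free-form scene descriptions. The label set, head, and loss weighting are hypothetical placeholders, not the paper's exact three tasks (see https://github.com/Asterisci/Language-Assisted-3D for the actual implementation).

```python
# Hedged sketch: text-derived attribute classification as an auxiliary objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAuxiliaryHead(nn.Module):
    def __init__(self, feat_dim=256, num_attributes=20):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_attributes)

    def forward(self, object_feats, attribute_labels):
        # object_feats: (num_objects, feat_dim) pooled per-object features
        # attribute_labels: (num_objects, num_attributes) multi-hot targets mined from text
        logits = self.classifier(object_feats)
        return F.binary_cross_entropy_with_logits(logits, attribute_labels)

# Toy usage: during training, this auxiliary loss would be added (with a small
# weight) to the main detection or instance segmentation loss.
feats = torch.randn(8, 256)
labels = torch.randint(0, 2, (8, 20)).float()
aux_loss = AttributeAuxiliaryHead()(feats, labels)
```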

* Accepted by AAAI 2023 