Ruizhen Hu

Interaction-Driven Active 3D Reconstruction with Object Interiors

Oct 23, 2023
Zihao Yan, Fubao Su, Mingyang Wang, Ruizhen Hu, Hao Zhang, Hui Huang

We introduce an active 3D reconstruction method which integrates visual perception, robot-object interaction, and 3D scanning to recover both the exterior and interior, i.e., unexposed, geometries of a target 3D object. Unlike other works in active vision which focus on optimizing camera viewpoints to better investigate the environment, the primary feature of our reconstruction is an analysis of the interactability of various parts of the target object and the ensuing part manipulation by a robot to enable scanning of occluded regions. As a result, an understanding of part articulations of the target object is obtained on top of complete geometry acquisition. Our method operates fully automatically by a Fetch robot with built-in RGBD sensors. It iterates between interaction analysis and interaction-driven reconstruction, scanning and reconstructing detected moveable parts one at a time, where both the articulated part detection and mesh reconstruction are carried out by neural networks. In the final step, all the remaining, non-articulated parts, including all the interior structures that had been exposed by prior part manipulations and subsequently scanned, are reconstructed to complete the acquisition. We demonstrate the performance of our method via qualitative and quantitative evaluation, ablation studies, comparisons to alternatives, as well as experiments in a real environment.
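
A minimal control-flow sketch of the iterative scan-interact-reconstruct loop described above, assuming placeholder helpers in place of the robot, the articulated-part detector, and the reconstruction networks (none of this is the authors' implementation):

```python
# Control-flow sketch only: all helpers are illustrative stand-ins for the
# Fetch robot, the neural part detector, and the mesh reconstruction network.

def detect_movable_part(scanned):
    """Placeholder for the neural articulated-part detector.
    Returns a part id, or None when no more movable parts are found."""
    remaining = [p for p in ALL_PARTS if p not in scanned]
    return remaining[0] if remaining else None

def interact_and_scan(part):
    """Placeholder: the robot actuates `part`, exposing and scanning
    previously occluded interior regions."""
    return {"part": part, "points": f"<scan of {part}>"}

def reconstruct_mesh(scan):
    """Placeholder for the neural mesh reconstruction of one part."""
    return f"mesh({scan['part']})"

ALL_PARTS = ["drawer", "door"]           # toy example of articulated parts

meshes, scanned = [], set()
while True:                              # iterate: analysis -> interaction -> reconstruction
    part = detect_movable_part(scanned)
    if part is None:
        break
    scan = interact_and_scan(part)
    meshes.append(reconstruct_mesh(scan))
    scanned.add(part)

# Final pass: reconstruct all remaining non-articulated geometry,
# including interior structures exposed by the interactions above.
meshes.append("mesh(static + interior)")
print(meshes)
```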

* Accepted to SIGGRAPH Asia 2023, project page at https://vcc.tech/research/2023/InterRecon 

AffordPose: A Large-scale Dataset of Hand-Object Interactions with Affordance-driven Hand Pose

Sep 16, 2023
Juntao Jian, Xiuping Liu, Manyi Li, Ruizhen Hu, Jian Liu

How humans interact with objects depends on the functional roles of the target objects, which introduces the problem of affordance-aware hand-object interaction. It requires a large number of human demonstrations for the learning and understanding of plausible and appropriate hand-object interactions. In this work, we present AffordPose, a large-scale dataset of hand-object interactions with affordance-driven hand pose. We first annotate specific part-level affordance labels for each object, e.g., twist, pull, and handle-grasp, instead of general intents such as use or handover, to indicate the purpose and guide the localization of the hand-object interactions. The fine-grained hand-object interactions reveal the influence of hand-centered affordances on the detailed arrangement of the hand poses, yet also exhibit a certain degree of diversity. We collect a total of 26.7K hand-object interactions, each including the 3D object shape, the part-level affordance label, and the manually adjusted hand poses. Comprehensive data analysis shows the common characteristics and diversity of hand-object interactions per affordance via parameter statistics and contact computation. We also conduct experiments on the tasks of hand-object affordance understanding and affordance-oriented hand-object interaction generation to validate the effectiveness of our dataset in learning fine-grained hand-object interactions. Project page: https://github.com/GentlesJan/AffordPose.
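
The released format is documented on the project page; purely as an illustration, a per-sample record covering the three fields listed above (3D object shape, part-level affordance label, adjusted hand pose) might be organized as below. The field names and the 48-dimensional MANO-style pose vector are assumptions, not the dataset's actual schema:

```python
# Hypothetical per-sample record mirroring the fields named in the abstract.
# The actual AffordPose file layout may differ; see the project page.
from dataclasses import dataclass
from typing import List

@dataclass
class AffordPoseSample:
    object_mesh_path: str        # 3D object shape
    part_affordance: str         # e.g. "twist", "pull", "handle-grasp"
    part_vertex_ids: List[int]   # vertices of the annotated object part
    hand_pose: List[float]       # manually adjusted hand pose parameters

sample = AffordPoseSample(
    object_mesh_path="objects/mug_001.obj",
    part_affordance="handle-grasp",
    part_vertex_ids=[1203, 1204, 1210],
    hand_pose=[0.0] * 48,        # placeholder pose vector
)
print(sample.part_affordance)
```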

* Accepted by ICCV 2023 

Semi-Weakly Supervised Object Kinematic Motion Prediction

Apr 03, 2023
Gengxin Liu, Qian Sun, Haibin Huang, Chongyang Ma, Yulan Guo, Li Yi, Hui Huang, Ruizhen Hu

Given a 3D object, kinematic motion prediction aims to identify the mobile parts as well as the corresponding motion parameters. Due to the large variations in both topological structure and geometric details of 3D objects, this remains a challenging task, and the lack of large-scale labeled data also constrains the performance of deep learning based approaches. In this paper, we tackle the problem of object kinematic motion prediction in a semi-weakly supervised manner. Our key observations are two-fold. First, although 3D datasets with fully annotated motion labels are limited, there are existing large-scale datasets and methods for object part semantic segmentation. Second, semantic part segmentation and mobile part segmentation are not always consistent, but it is possible to detect the mobile parts from the underlying 3D structure. To this end, we propose a graph neural network to learn the map between hierarchical part-level segmentation and mobile part parameters, which are further refined based on geometric alignment. The network can first be trained on the PartNet-Mobility dataset with fully labeled mobility information and then applied to the PartNet dataset with its fine-grained and hierarchical part-level segmentation. The network predictions yield a large number of 3D objects with pseudo-labeled mobility information, which can further be used for weakly supervised learning with pre-existing segmentation. Our experiments show that the augmented data yields significant performance boosts for a previous method designed for kinematic motion prediction on 3D partial scans.
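
As a schematic of the idea of mapping part-level structure to mobility parameters, the toy message-passing step below aggregates features over a part-adjacency graph and regresses a motion axis, a pivot, and a joint-type score per part. The weights are random and the feature/output layout is assumed; this is not the paper's network or training setup:

```python
# Toy message-passing step over a part-adjacency graph, illustrating the idea
# of predicting per-part mobility parameters from part-level features.
import numpy as np

rng = np.random.default_rng(0)
num_parts, feat_dim = 4, 16
X = rng.normal(size=(num_parts, feat_dim))         # per-part geometric features
A = np.array([[0, 1, 1, 0],                        # adjacency of the part hierarchy
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
W_msg = rng.normal(size=(feat_dim, feat_dim))      # message weights (untrained)
W_out = rng.normal(size=(feat_dim, 7))             # -> [axis(3), pivot(3), joint-type score]

H = np.tanh((A @ X) @ W_msg + X)                   # aggregate neighbor features
params = H @ W_out                                 # per-part motion parameters
axis = params[:, :3] / np.linalg.norm(params[:, :3], axis=1, keepdims=True)
print(axis.shape)                                  # (4, 3) unit motion axes
```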

* CVPR 2023 

Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance

Feb 28, 2023
Xueyi Liu, Ji Zhang, Ruizhen Hu, Haibin Huang, He Wang, Li Yi

Category-level articulated object pose estimation aims to estimate a hierarchy of articulation-aware object poses of an unseen articulated object from a known category. To reduce the heavy annotations needed for supervised learning methods, we present a novel self-supervised strategy that solves this problem without any human labels. Our key idea is to factorize canonical shapes and articulated object poses from input articulated shapes through part-level equivariant shape analysis. Specifically, we first introduce the concept of part-level SE(3) equivariance and devise a network to learn features with this property. Then, through a carefully designed fine-grained pose-shape disentanglement strategy, we expect canonical spaces supporting pose estimation to be induced automatically. We can thus further predict articulated object poses as per-part rigid transformations describing how parts transform from their canonical part spaces to the camera space. Extensive experiments demonstrate the effectiveness of our method on both complete and partial point clouds from synthetic and real articulated object datasets.
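
The output representation, per-part rigid transformations from canonical part space to camera space, can be illustrated with plain 4x4 transforms. The parts, point sets, and the 30-degree lid rotation below are arbitrary examples, not the estimation network itself:

```python
# Apply per-part rigid (SE(3)) transforms mapping canonical part spaces to camera space.
import numpy as np

np.random.seed(0)

def se3(R, t):
    """Compose a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def apply(T, pts):
    """Apply a 4x4 transform to an (N, 3) point array."""
    homo = np.c_[pts, np.ones(len(pts))]
    return (homo @ T.T)[:, :3]

canonical_parts = {
    "lid":  np.random.rand(100, 3),   # canonical-space points of each part
    "body": np.random.rand(200, 3),
}
theta = np.deg2rad(30.0)              # e.g. the lid opened by 30 degrees
R_lid = np.array([[1, 0, 0],
                  [0, np.cos(theta), -np.sin(theta)],
                  [0, np.sin(theta),  np.cos(theta)]])
part_poses = {                        # predicted per-part rigid transforms
    "lid":  se3(R_lid, np.array([0.0, 0.0, 0.1])),
    "body": se3(np.eye(3), np.zeros(3)),
}
camera_space = {name: apply(part_poses[name], pts)
                for name, pts in canonical_parts.items()}
print({k: v.shape for k, v in camera_space.items()})
```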

* ICLR 2023 

ARO-Net: Learning Neural Fields from Anchored Radial Observations

Dec 19, 2022
Yizhi Wang, Zeyu Huang, Ariel Shamir, Hui Huang, Hao Zhang, Ruizhen Hu

We introduce anchored radial observations (ARO), a novel shape encoding for learning neural field representations of shapes that is category-agnostic and generalizable amid significant shape variations. The main idea behind our work is to reason about shapes through partial observations from a set of viewpoints, called anchors. We develop a general and unified shape representation by employing a fixed set of anchors, obtained via Fibonacci sampling, and designing a coordinate-based deep neural network to predict the occupancy value of a query point in space. Unlike prior neural implicit models, which use a global shape feature, our shape encoder operates on contextual, query-specific features. To predict point occupancy, locally observed shape information from the perspective of the anchors surrounding the input query point is encoded and aggregated through an attention module before implicit decoding is performed. We demonstrate the quality and generality of our network, coined ARO-Net, on surface reconstruction from sparse point clouds, with tests on novel and unseen object categories, "one-shape" training, and comparisons to state-of-the-art neural and classical methods for reconstruction and tessellation.
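
Two ingredients named in the abstract are easy to sketch: Fibonacci sampling of a fixed anchor set on a sphere, and simple per-anchor radial quantities (direction and distance from each anchor to a query point). The 4-dimensional feature layout is an assumption for illustration; the attention-based encoder and implicit decoder of ARO-Net are not shown:

```python
# Fibonacci-sampled anchors plus per-anchor radial features for one query point.
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    """Near-uniform points on a sphere via the Fibonacci spiral."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - (2 * i + 1) / n                 # z uniformly spaced in (-1, 1)
    r = np.sqrt(1 - z ** 2)
    phi = 2 * np.pi * i / golden
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

anchors = fibonacci_sphere(48)              # fixed anchor set
query = np.array([0.1, -0.2, 0.3])          # a query point in space
offsets = query - anchors                   # anchor -> query vectors
dists = np.linalg.norm(offsets, axis=1, keepdims=True)
dirs = offsets / dists                      # unit viewing directions
aro_features = np.concatenate([dirs, dists], axis=1)   # (48, 4) per-anchor features
print(aro_features.shape)
```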

Asynchronous Collaborative Autoscanning with Mode Switching for Multi-Robot Scene Reconstruction

Oct 10, 2022
Junfu Guo, Changhao Li, Xi Xia, Ruizhen Hu, Ligang Liu

When conducting autonomous scanning for the online reconstruction of unknown indoor environments, robots have to be competent at both exploring the scene structure and reconstructing objects with high quality. Our key observation is that these tasks demand different scanning properties of the robots: rapid movement and far vision for global exploration, and slow movement and narrow vision for local object reconstruction, which we refer to as two different scanning modes: explorer and reconstructor, respectively. When multiple robots collaborate for efficient exploration and fine-grained reconstruction, the questions of when to generate and how to assign these tasks must be answered carefully. We therefore propose a novel asynchronous collaborative autoscanning method with mode switching, which generates two kinds of scanning tasks with associated scanning modes, i.e., exploration tasks with the explorer mode and reconstruction tasks with the reconstructor mode, and assigns them to the robots to execute in an asynchronous collaborative manner, greatly boosting the scanning efficiency and reconstruction quality. The task assignment is optimized by solving a modified Multi-Depot Multiple Traveling Salesman Problem (MDMTSP). Moreover, to further enhance the collaboration and increase the efficiency, we propose a task-flow model that activates the task generation and assignment process as soon as any robot finishes all of its tasks, without waiting for the other robots to complete the tasks assigned in the previous iteration. Extensive experiments show the importance of each key component of our method and its superiority over previous methods in scanning efficiency and reconstruction quality.
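
The asynchronous task-flow idea, re-triggering task generation and assignment as soon as any robot becomes idle rather than waiting for a global synchronization barrier, can be sketched with an event queue. The greedy nearest-task rule below is a stand-in for the MDMTSP optimization actually used in the paper:

```python
# Event-driven sketch: assign a new task to whichever robot becomes idle first.
import heapq, random

random.seed(0)
tasks = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(6)]
robots = {"explorer": (0.0, 0.0), "reconstructor": (10.0, 10.0)}

events = [(0.0, name) for name in robots]    # (time_when_idle, robot_name)
heapq.heapify(events)

while tasks and events:
    t, name = heapq.heappop(events)          # the first robot to become idle
    x, y = robots[name]
    # greedy stand-in for MDMTSP: give this robot its nearest remaining task
    task = min(tasks, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    tasks.remove(task)
    travel = ((task[0] - x) ** 2 + (task[1] - y) ** 2) ** 0.5
    robots[name] = task
    heapq.heappush(events, (t + travel, name))
    print(f"t={t:5.2f}  {name:13s} -> task at {task}")
```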

* ACM Trans. Graph., Vol. 41, No. 6, Article 198. Publication date: December 2022  
* 13 pages, 12 figures, Conference: SIGGRAPH Asia 2022 

Shape Completion with Points in the Shadow

Oct 04, 2022
Bowen Zhang, Xi Zhao, He Wang, Ruizhen Hu

Single-view point cloud completion aims to recover the full geometry of an object based on only limited observation, which is extremely hard due to the data sparsity and occlusion. The core challenge is to generate plausible geometries to fill the unobserved part of the object based on a partial scan, which is under-constrained and suffers from a huge solution space. Inspired by the classic shadow volume technique in computer graphics, we propose a new method to reduce the solution space effectively. Our method considers the camera as a light source that casts rays toward the object. Such light rays build a reasonably constrained but sufficiently expressive basis for completion. The completion process is then formulated as a point displacement optimization problem. Points are initialized at the partial scan and then moved to their goal locations with two types of movement for each point: a directional movement along the light ray and a constrained local movement for shape refinement. We design neural networks to predict the ideal point movements to obtain the completion results. We demonstrate that our method is accurate, robust, and generalizable through exhaustive evaluation and comparison. Moreover, it outperforms state-of-the-art methods qualitatively and quantitatively on the MVP dataset.
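
A geometric sketch of the two movement types: each partial-scan point is displaced along its camera ray and then refined by a small local offset. The displacements here are random placeholders; in the method they are predicted by neural networks:

```python
# Move partial-scan points along camera rays ("into the shadow") plus a local refinement.
import numpy as np

rng = np.random.default_rng(1)
camera = np.array([0.0, 0.0, 2.0])
partial = rng.uniform(-0.5, 0.5, size=(256, 3))      # observed partial scan

rays = partial - camera
rays /= np.linalg.norm(rays, axis=1, keepdims=True)  # unit rays camera -> point

t = rng.uniform(0.0, 0.2, size=(256, 1))             # directional movement along each ray
local = rng.normal(scale=0.01, size=(256, 3))        # constrained local refinement

completed = partial + t * rays + local               # displaced points
print(completed.shape)
```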

* SIGGRAPH Asia 2022 Conference Paper 

Active Self-Training for Weakly Supervised 3D Scene Semantic Segmentation

Sep 15, 2022
Gengxin Liu, Oliver van Kaick, Hui Huang, Ruizhen Hu

Since the preparation of labeled data for training semantic segmentation networks of point clouds is a time-consuming process, weakly supervised approaches have been introduced to learn from only a small fraction of the data. These methods are typically based on learning with contrastive losses while automatically deriving per-point pseudo-labels from a sparse set of user-annotated labels. In this paper, our key observation is that the selection of which samples to annotate is as important as how these samples are used for training. Thus, we introduce a method for weakly supervised segmentation of 3D scenes that combines self-training with active learning. The active learning selects points for annotation that are likely to result in performance improvements to the trained model, while the self-training makes efficient use of the user-provided labels for learning the model. We demonstrate that our approach leads to an effective method that improves scene segmentation over previous works and baselines, while requiring only a small number of user annotations.
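
A minimal sketch of the two ingredients, assuming an entropy-based selection criterion and a fixed confidence threshold for pseudo-labels; both are assumptions for illustration, and the paper's actual selection and pseudo-labeling criteria may differ:

```python
# Active selection of uncertain points plus confidence-filtered pseudo-labels.
import numpy as np

rng = np.random.default_rng(0)
num_points, num_classes, budget = 1000, 5, 20
probs = rng.dirichlet(np.ones(num_classes), size=num_points)   # per-point softmax outputs

entropy = -(probs * np.log(probs + 1e-9)).sum(axis=1)
to_annotate = np.argsort(-entropy)[:budget]          # most uncertain points -> user labels

confidence = probs.max(axis=1)
pseudo_mask = confidence > 0.9                        # confident points -> pseudo-labels
pseudo_labels = probs.argmax(axis=1)

print(f"annotate {len(to_annotate)} points; "
      f"keep {pseudo_mask.sum()} pseudo-labels, e.g. {pseudo_labels[pseudo_mask][:5]}")
```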

* Computational Visual Media 2022  

Photo-to-Shape Material Transfer for Diverse Structures

May 09, 2022
Ruizhen Hu, Xiangyu Su, Xiangkai Chen, Oliver Van Kaick, Hui Huang

We introduce a method for automatically assigning photorealistic relightable materials to 3D shapes. Our method takes as input a photo exemplar of a real object and a segmented 3D object, and uses the exemplar to guide the assignment of materials to the parts of the shape, so that the appearance of the resulting shape is as similar as possible to the exemplar. To accomplish this goal, our method combines an image translation neural network with a material assignment neural network. The image translation network translates the color from the exemplar to a projection of the 3D shape, and the part segmentation from the projection to the exemplar. Then, the material prediction network assigns materials from a collection of realistic materials to the projected parts, based on the translated images and the perceptual similarity of the materials. One key idea of our method is to use the translation network to establish a correspondence between the exemplar and the shape projection, which allows us to transfer materials between objects with diverse structures. Another key idea is to use the two pairs of (color, segmentation) images provided by the image translation to guide the material assignment, which enables us to ensure consistency in the assignment. We demonstrate that our method assigns materials to shapes so that their appearances better resemble the input exemplars, improving the quality of the results over the state-of-the-art method and allowing us to automatically create thousands of shapes with high-quality photorealistic materials. Code and data for this paper are available at https://github.com/XiangyuSu611/TMT.
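
As a toy illustration of the material-assignment step, each projected part can pick the material whose perceptual feature is closest to the part's translated appearance feature. The random feature vectors, part names, and material names below are placeholders, not the paper's learned descriptors or its actual material collection:

```python
# Nearest-neighbor material assignment in a (placeholder) perceptual feature space.
import numpy as np

rng = np.random.default_rng(2)
part_features = {"seat": rng.normal(size=8),      # appearance features of projected parts
                 "legs": rng.normal(size=8)}
materials = {f"material_{i}": rng.normal(size=8)  # perceptual features of the collection
             for i in range(10)}

assignment = {}
for part, f in part_features.items():
    assignment[part] = min(materials, key=lambda m: np.linalg.norm(materials[m] - f))
print(assignment)
```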
