Yufeng Yue

Human Demonstrations are Generalizable Knowledge for Robots

Dec 05, 2023
Guangyan Chen, Te Cui, Tianxing Zhou, Zicai Peng, Mengxiao Hu, Meiling Wang, Yi Yang, Yufeng Yue

Learning from human demonstrations is an emerging trend for designing intelligent robotic systems. However, previous methods typically regard videos as instructions, simply dividing them into action sequences for robotic repetition, which poses obstacles to generalization across diverse tasks or object instances. In this paper, we propose a different perspective, considering human demonstration videos not as mere instructions, but as a source of knowledge for robots. Motivated by this perspective and the remarkable comprehension and generalization capabilities of large language models (LLMs), we propose DigKnow, a method that DIstills Generalizable KNOWledge with a hierarchical structure. Specifically, DigKnow begins by converting human demonstration video frames into observation knowledge. This knowledge is then analyzed to extract human action knowledge and further distilled into pattern knowledge encompassing tasks and object instances, yielding generalizable knowledge with a hierarchical structure. In settings with different tasks or object instances, DigKnow retrieves the knowledge relevant to the current task and objects. The LLM-based planner then plans based on the retrieved knowledge, and the policy executes actions in line with the plan to accomplish the designated task. Using the retrieved knowledge, we validate and rectify planning and execution outcomes, which substantially improves the success rate. Experimental results across a range of tasks and scenes demonstrate the effectiveness of this approach in enabling real-world robots to accomplish tasks with knowledge derived from human demonstrations.
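
For illustration, a minimal sketch of how hierarchical knowledge retrieval and LLM-based planning might be wired together. The class, field, and function names (KnowledgeBase, retrieve, build_plan_prompt) and the keyword-matching retrieval are illustrative assumptions, not DigKnow's actual interface.

```python
# Illustrative sketch (not the authors' code): retrieving hierarchical knowledge
# distilled from demonstrations and prompting an LLM-based planner with it.
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    observation: list = field(default_factory=list)  # per-frame scene descriptions
    action: list = field(default_factory=list)       # extracted human actions
    pattern: list = field(default_factory=list)      # task/object-level patterns

    def retrieve(self, task: str, objects: list) -> list:
        # naive keyword match stands in for the retrieval step
        keys = [task] + objects
        return [k for k in self.pattern + self.action
                if any(key.lower() in k.lower() for key in keys)]

def build_plan_prompt(task: str, objects: list, knowledge: list) -> str:
    # the retrieved knowledge conditions the LLM-based planner
    return (
        f"Task: {task}\nObjects: {', '.join(objects)}\n"
        "Relevant knowledge distilled from human demonstrations:\n"
        + "\n".join(f"- {k}" for k in knowledge)
        + "\nProduce a step-by-step robot plan."
    )

kb = KnowledgeBase(pattern=["To pour, grasp the container near its handle",
                            "Place objects on stable, flat surfaces"])
print(build_plan_prompt("pour water into the mug", ["bottle", "mug"],
                        kb.retrieve("pour", ["mug"])))
```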

MC-NeRF: Multi-Camera Neural Radiance Fields for Multi-Camera Image Acquisition Systems

Sep 14, 2023
Yu Gao, Lutong Su, Hao Liang, Yufeng Yue, Yi Yang, Mengyin Fu

Neural Radiance Fields (NeRF) employ multi-view images for 3D scene representation and have shown remarkable performance. As one of the primary sources of multi-view images, multi-camera systems encounter challenges such as varying intrinsic parameters and frequent pose changes. Most previous NeRF-based methods assume a single global camera and seldom consider scenarios with multiple cameras. Moreover, some pose-robust methods remain susceptible to suboptimal solutions when poses are poorly initialized. In this paper, we propose MC-NeRF, a method that jointly optimizes both intrinsic and extrinsic parameters for bundle-adjusting Neural Radiance Fields. Firstly, we conduct a theoretical analysis to tackle the degenerate case and the coupling issue that arise from the joint optimization of intrinsic and extrinsic parameters. Secondly, based on the proposed solutions, we introduce an efficient calibration image acquisition scheme for multi-camera systems, including the design of the calibration object. Lastly, we present a global end-to-end network with a training sequence that regresses intrinsic and extrinsic parameters, along with the rendering network. Moreover, since most existing datasets are designed for a single camera, we create a new dataset that includes four different styles of multi-camera acquisition systems, allowing readers to generate custom datasets. Experiments confirm the effectiveness of our method when each image corresponds to different camera parameters. Specifically, we adopt up to 110 images with 110 different intrinsic and extrinsic parameters, to achieve 3D scene representation without providing initial poses. The code and supplementary materials are available at https://in2-viaun.github.io/MC-NeRF.
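
As a loose illustration of the core idea, the PyTorch sketch below jointly optimizes per-camera learnable intrinsics/extrinsics and a tiny radiance-field MLP through a photometric loss. It is a heavily simplified stand-in, not MC-NeRF itself: the parameterization, ray sampling, and calibration scheme here are assumptions made only for the example.

```python
# Minimal sketch (PyTorch): per-camera intrinsics/extrinsics optimized jointly
# with a radiance field via a photometric loss. Not the MC-NeRF implementation.
import torch
import torch.nn as nn

class PerCameraParams(nn.Module):
    def __init__(self, num_cams: int):
        super().__init__()
        self.focal = nn.Parameter(torch.full((num_cams,), 500.0))  # intrinsic per camera
        self.pose6d = nn.Parameter(torch.zeros(num_cams, 6))       # extrinsic (axis-angle + t)

class TinyField(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
    def forward(self, x):                      # -> (rgb, density)
        out = self.mlp(x)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

cams, field = PerCameraParams(num_cams=4), TinyField()
opt = torch.optim.Adam(list(cams.parameters()) + list(field.parameters()), lr=1e-3)

# one mock training step: the photometric loss couples camera parameters and the field
target_rgb = torch.rand(1024, 3)
points = torch.rand(1024, 3) * cams.focal[0] / 500.0   # placeholder for real ray sampling
rgb, _ = field(points)
loss = ((rgb - target_rgb) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(f"loss={loss.item():.4f}")
```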

* This manuscript is currently under review 

PointGPT: Auto-regressively Generative Pre-training from Point Clouds

May 23, 2023
Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, Yufeng Yue

Large language models (LLMs) based on the generative pre-training transformer (GPT) have demonstrated remarkable effectiveness across a diverse range of downstream tasks. Inspired by the advancements of GPT, we present PointGPT, a novel approach that extends the concept of GPT to point clouds, addressing the challenges associated with disorder properties, low information density, and task gaps. Specifically, a point cloud auto-regressive generation task is proposed to pre-train transformer models. Our method partitions the input point cloud into multiple point patches and arranges them in an ordered sequence based on their spatial proximity. Then, an extractor-generator based transformer decoder, with a dual masking strategy, learns latent representations conditioned on the preceding point patches, aiming to predict the next one in an auto-regressive manner. Our scalable approach allows for learning high-capacity models that generalize well, achieving state-of-the-art performance on various downstream tasks. In particular, our approach achieves classification accuracies of 94.9% on the ModelNet40 dataset and 93.4% on the ScanObjectNN dataset, outperforming all other transformer models. Furthermore, our method also attains new state-of-the-art accuracies on all four few-shot learning benchmarks.
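
For intuition, a NumPy sketch of partitioning a point cloud into patches and arranging them in a proximity-based order, which is the input an auto-regressive pre-training objective needs. The random-center grouping and the greedy nearest-neighbor ordering below are simplified stand-ins, not the paper's exact grouping or ordering scheme.

```python
# Sketch: point patches + proximity-based ordering for auto-regressive pre-training.
import numpy as np

def make_patches(points: np.ndarray, num_patches: int, patch_size: int):
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), num_patches, replace=False)]
    # each patch = the patch_size nearest points to its center
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    patches = points[np.argsort(d, axis=1)[:, :patch_size]]
    return centers, patches

def proximity_order(centers: np.ndarray):
    # greedy nearest-neighbor walk over patch centers (proxy for spatial ordering)
    order, remaining = [0], set(range(1, len(centers)))
    while remaining:
        last = centers[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(centers[i] - last))
        order.append(nxt); remaining.remove(nxt)
    return order

pts = np.random.default_rng(1).random((2048, 3))
centers, patches = make_patches(pts, num_patches=64, patch_size=32)
order = proximity_order(centers)
# during pre-training, patch order[t+1] would be predicted from patches order[:t+1]
print(patches[order].shape)  # (64, 32, 3)
```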

* 9 pages, 2 figures 

CRRS: Concentric Rectangles Regression Strategy for Multi-point Representation on Fisheye Images

Mar 26, 2023
Xihan Wang, Xi Xu, Yu Gao, Yi Yang, Yufeng Yue, Mengyin Fu

Modern object detectors use rectangular bounding boxes as the conventional way to represent objects. On fisheye images, however, rectangular boxes enclose more background noise than semantic information. Although multi-point representation has been proposed, its regression accuracy and convergence are still inferior to those of the widely used rectangular boxes. To further exploit the advantages of multi-point representation for distorted images, we propose the Concentric Rectangles Regression Strategy (CRRS). We adopt a smoother mean loss to allocate weights and discuss the effect of the hyper-parameters on prediction results. Moreover, an accurate pixel-level method is designed to compute the IoU of irregular regions for evaluating detector performance. Compared with previous work on multi-point representation, experiments show that CRRS improves training performance in both accuracy and stability. We also show that the multi-task weighting strategy facilitates the regression process in this design.
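
To make the pixel-level IoU idea concrete, here is a small sketch that rasterizes two irregular (polygonal) regions onto a pixel grid and computes their IoU. The rasterization via matplotlib's point-in-polygon test is an illustrative choice, not the paper's implementation.

```python
# Sketch: pixel-level IoU between two irregular polygonal regions.
import numpy as np
from matplotlib.path import Path

def pixel_iou(poly_a: np.ndarray, poly_b: np.ndarray, size: int = 256) -> float:
    ys, xs = np.mgrid[0:size, 0:size]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1) + 0.5  # pixel centers
    mask_a = Path(poly_a).contains_points(grid)
    mask_b = Path(poly_b).contains_points(grid)
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

square = np.array([[50, 50], [200, 50], [200, 200], [50, 200]], dtype=float)
shifted = square + 40.0
print(f"pixel-level IoU: {pixel_iou(square, shifted):.3f}")
```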

DR-WLC: Dimensionality Reduction cognition for object detection and pose estimation by Watching, Learning and Checking

Jan 17, 2023
Yu Gao, Xi Xu, Tianji Jiang, Siyuan Chen, Yi Yang, Yufeng Yue, Mengyin Fu

Object detection and pose estimation are difficult tasks in robotics and autonomous driving. Existing object detection and pose estimation methods mostly train on data of the same dimensionality as the task. For example, 2D object detection usually requires a large amount of costly 2D annotation data. Using higher-dimensional information to supervise lower-dimensional tasks is a feasible way to reduce dataset size. In this work, we propose DR-WLC, a dimensionality reduction cognitive model that performs object detection and pose estimation at the same time. The model requires only 3D models of objects and unlabeled environment images (with or without objects) for training. In addition, a bounding-box generation strategy is proposed to link the 3D models to the 2D object detection task. Experiments show that our method works well without any manual annotations and is easy to deploy in practical applications. Source code is at https://github.com/IN2-ViAUn/DR-WLC.
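
As a minimal sketch of how a 3D model can supervise a 2D box, the example below projects a model's vertices through a pinhole camera under a hypothesized pose and takes the enclosing axis-aligned rectangle. The camera and pose values are made up for the example; this is not the paper's generation strategy verbatim.

```python
# Sketch: deriving a 2D bounding box from a 3D model under a hypothesized pose.
import numpy as np

def project_bbox(vertices: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray):
    cam = vertices @ R.T + t          # model -> camera coordinates
    uv = cam @ K.T                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return x0, y0, x1, y1             # axis-aligned 2D box enclosing the projection

cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
print(project_bbox(cube, K, R, t))
```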

Full Transformer Framework for Robust Point Cloud Registration with Deep Information Interaction

Dec 17, 2021
Guangyan Chen, Meiling Wang, Yufeng Yue, Qingxiang Zhang, Li Yuan

Recent Transformer-based methods have achieved advanced performance in point cloud registration by exploiting the Transformer's order-invariance and dependency modeling to aggregate information. However, they still suffer from indistinct feature extraction and sensitivity to noise and outliers. The reasons are: (1) the adoption of CNNs fails to model global relations due to their local receptive fields, resulting in extracted features that are susceptible to noise; (2) the shallow-wide architecture of Transformers and the lack of positional encoding lead to indistinct feature extraction due to inefficient information interaction; (3) the omission of geometrical compatibility leads to inaccurate classification of inliers and outliers. To address these limitations, a novel full Transformer network for point cloud registration is proposed, named the Deep Interaction Transformer (DIT), which incorporates: (1) a Point Cloud Structure Extractor (PSE) to model global relations and retrieve structural information with Transformer encoders; (2) a deep-narrow Point Feature Transformer (PFT) to facilitate deep information interaction across two point clouds with positional encoding, such that Transformers can establish comprehensive associations and directly learn relative positions between points; (3) a Geometric Matching-based Correspondence Confidence Evaluation (GMCCE) method to measure spatial consistency and estimate inlier confidence by designing a triangulated descriptor. Extensive experiments on clean, noisy, and partially overlapping point cloud registration demonstrate that our method outperforms state-of-the-art methods.
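
To illustrate the spatial-consistency idea behind inlier confidence, here is a sketch that scores putative correspondences by how many other correspondences preserve pairwise distances (rigid motion preserves lengths). This simplified pairwise check is only a stand-in for the triangle-based GMCCE descriptor described above.

```python
# Sketch: scoring correspondences by pairwise length consistency (inliers agree).
import numpy as np

def consistency_scores(src: np.ndarray, dst: np.ndarray, tau: float = 0.05) -> np.ndarray:
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None, :] - dst[None, :, :], axis=-1)
    compatible = np.abs(d_src - d_dst) < tau     # pairs that preserve length
    return compatible.mean(axis=1)               # fraction of compatible partners

rng = np.random.default_rng(0)
src = rng.random((100, 3))
dst = src + np.array([0.3, -0.1, 0.2])           # pure translation: all inliers
dst[:20] = rng.random((20, 3))                   # corrupt the first 20 matches
scores = consistency_scores(src, dst)
print("mean inlier score:", scores[20:].mean(), "mean outlier score:", scores[:20].mean())
```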

* 10 pages, 7 figures 

The Hilti SLAM Challenge Dataset

Sep 23, 2021
Michael Helmberger, Kristian Morin, Nitish Kumar, Danwei Wang, Yufeng Yue, Giovanni Cioffi, Davide Scaramuzza

Accurate and robust pose estimation is a fundamental capability for autonomous systems to navigate, map and perform tasks. In particular, construction environments pose challenging problems for Simultaneous Localization and Mapping (SLAM) algorithms due to sparsity, varying illumination conditions, and dynamic objects. Current academic research in SLAM is focused on developing more accurate and robust algorithms, for example by fusing different sensor modalities. To support this research, we propose a new dataset, the Hilti SLAM Challenge Dataset. The sensor platform used to collect this dataset contains a number of visual, lidar, and inertial sensors, all of which have been rigorously calibrated. All data is temporally aligned to support precise multi-sensor fusion. Each dataset includes accurate ground truth to allow direct testing of SLAM results. Raw data as well as intrinsic and extrinsic sensor calibration data from twelve datasets in various environments are provided. Each environment represents common scenarios found in building construction sites in various stages of completion.

Semantic Reinforced Attention Learning for Visual Place Recognition

Aug 19, 2021
Guohao Peng, Yufeng Yue, Jun Zhang, Zhenyu Wu, Xiaoyu Tang, Danwei Wang

Large-scale visual place recognition (VPR) is inherently challenging because not all visual cues in the image are beneficial to the task. To highlight the task-relevant visual cues in the feature embedding, existing attention mechanisms are either based on artificial rules or trained in a purely data-driven manner. To fill the gap between the two types, we propose a novel Semantic Reinforced Attention Learning Network (SRALNet), in which the inferred attention can benefit from both semantic priors and data-driven fine-tuning. The contribution is two-fold. (1) To suppress misleading local features, an interpretable local weighting scheme is proposed based on hierarchical feature distribution. (2) By exploiting the interpretability of the local weighting scheme, a semantic-constrained initialization is proposed so that the local attention can be reinforced by semantic priors. Experiments demonstrate that our method outperforms state-of-the-art techniques on city-scale VPR benchmark datasets.
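
As a rough illustration of semantically constrained attention, the sketch below weights local descriptors by a per-class semantic prior multiplied by a data-driven score before aggregation. The class priors, shapes, and the simple product-then-normalize rule are assumptions for the example, not SRALNet's actual formulation.

```python
# Sketch: semantic priors constraining data-driven attention over local features.
import numpy as np

SEMANTIC_PRIOR = {"building": 1.0, "vegetation": 0.6, "road": 0.4,
                  "sky": 0.1, "dynamic": 0.1}

def attend(features: np.ndarray, labels: list, learned_score: np.ndarray) -> np.ndarray:
    prior = np.array([SEMANTIC_PRIOR[l] for l in labels])
    att = prior * learned_score                      # semantics constrain the attention
    att = att / att.sum()
    return (att[:, None] * features).sum(axis=0)     # attention-weighted global descriptor

rng = np.random.default_rng(0)
feats = rng.random((5, 128))                          # 5 local descriptors of dim 128
labels = ["building", "sky", "dynamic", "road", "vegetation"]
learned = rng.random(5)
print(attend(feats, labels, learned).shape)          # (128,)
```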
