Yanxu Li


Efficient View Path Planning for Autonomous Implicit Reconstruction

Sep 27, 2022
Jing Zeng, Yanxu Li, Yunlong Ran, Shuo Li, Fei Gao, Lincheng Li, Shibo He, Jiming Chen, Qi Ye


Implicit neural representations have shown promising potential for 3D scene reconstruction. Recent work applies them to autonomous 3D reconstruction by learning an information gain for view path planning. Effective as this approach is, computing the information gain is expensive, and collision checking a 3D point with the implicit representation is much slower than with volumetric representations. In this paper, we propose to 1) leverage a neural network as an implicit function approximator for the information gain field and 2) combine the fine-grained implicit representation with a coarse volumetric representation to improve efficiency. With the improved efficiency, we further propose a novel informative path planning method based on a graph-based planner. Our method demonstrates significant improvements in reconstruction quality and planning efficiency over autonomous reconstruction with implicit and explicit representations. We deploy the method on a real UAV, and the results show that it can plan informative views and reconstruct a scene with high quality.
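The two ideas in the abstract can be illustrated with a minimal sketch: a small MLP that maps a candidate view position to a predicted information gain, paired with a coarse voxel occupancy grid that answers collision queries in constant time instead of querying the implicit representation point by point. All class names, network sizes, and grid parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class InfoGainMLP:
    """Tiny MLP approximating an information-gain field over 3D positions.
    Weights are random stand-ins; in practice the network would be trained."""
    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(3, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, xyz):
        h = np.maximum(xyz @ self.w1 + self.b1, 0.0)   # ReLU hidden layer
        return (h @ self.w2 + self.b2).squeeze(-1)      # scalar gain per point

class CoarseVoxelGrid:
    """Coarse occupancy grid: O(1) collision checks for path planning."""
    def __init__(self, size=(10, 10, 10), resolution=0.5):
        self.occ = np.zeros(size, dtype=bool)
        self.res = resolution

    def mark_occupied(self, xyz):
        i, j, k = (np.asarray(xyz) / self.res).astype(int)
        self.occ[i, j, k] = True

    def in_collision(self, xyz):
        i, j, k = (np.asarray(xyz) / self.res).astype(int)
        return bool(self.occ[i, j, k])

# Rank collision-free candidate views by predicted information gain.
grid = CoarseVoxelGrid()
grid.mark_occupied([1.0, 1.0, 1.0])
net = InfoGainMLP()
candidates = np.array([[1.0, 1.0, 1.0], [2.0, 0.5, 1.5], [0.2, 3.0, 0.8]])
free = [c for c in candidates if not grid.in_collision(c)]
gains = net.predict(np.array(free))
best = free[int(np.argmax(gains))]
```

The split mirrors the efficiency argument: the cheap voxel grid filters out colliding views first, and the learned field is evaluated only on the survivors.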


mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for Millimeter Wave Radar

Sep 12, 2022
Anjun Chen, Xiangyu Wang, Shaohao Zhu, Yanxu Li, Jiming Chen, Qi Ye


Millimeter Wave (mmWave) radar is gaining popularity as it can work in adverse environments such as smoke, rain, snow, and poor lighting. Prior work has explored the possibility of reconstructing 3D skeletons or meshes from noisy and sparse mmWave radar signals. However, it is unclear how accurately the 3D body can be reconstructed from mmWave signals across scenes, and how the reconstruction compares with that from cameras; these are important considerations whether mmWave radars are used alone or combined with cameras. To answer these questions, we first design and build an automatic 3D body annotation system with multiple sensors to collect a large-scale dataset. The dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes, together with skeleton/mesh annotations for the humans in those scenes. With this dataset, we train state-of-the-art methods with inputs from different sensors and test them in various scenarios. The results demonstrate that 1) despite the noise and sparsity of the generated point clouds, the mmWave radar achieves better reconstruction accuracy than the RGB camera but worse than the depth camera; and 2) reconstruction from the mmWave radar is moderately affected by adverse weather conditions, while the RGB(D) camera is severely affected. Further analysis of the dataset and the results sheds light on improving reconstruction from the mmWave radar and on combining signals from different sensors.
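The "synchronized and calibrated" part of the benchmark implies a standard rigid extrinsic transform that maps radar points into the camera frame before annotation or fusion. The sketch below is illustrative only (not the authors' pipeline); the rotation, translation, and point values are made up.

```python
import numpy as np

def align_radar_to_camera(points, R, t):
    """Apply an extrinsic calibration (rotation R: 3x3, translation t: (3,))
    to an N x 3 mmWave radar point cloud, expressing it in the camera frame."""
    return points @ R.T + t

# Identity-rotation example with a hypothetical offset between sensor origins.
radar_points = np.array([[0.0, 0.0, 2.0],
                         [0.5, -0.2, 1.8]])
R = np.eye(3)                     # assume sensors share orientation
t = np.array([0.1, 0.0, -0.05])   # hypothetical radar-to-camera offset (m)
cam_points = align_radar_to_camera(radar_points, R, t)
```

With both modalities in one frame, camera-derived skeleton/mesh annotations can serve as ground truth for the radar point cloud, which is the premise of the benchmark's cross-sensor comparison.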

* Multimedia 2022, 10 pages, 11 figures 