Soren Schwertfeger

Spotlights: Probing Shapes from Spherical Viewpoints

May 25, 2022
Jiaxin Wei, Lige Liu, Ran Cheng, Wenqing Jiang, Minghao Xu, Xinyu Jiang, Tao Sun, Soren Schwertfeger, Laurent Kneip

Figures 1–4 for Spotlights: Probing Shapes from Spherical Viewpoints

Recent years have witnessed the surge of learned representations that directly build upon point clouds. Though becoming increasingly expressive, most existing representations still struggle to generate ordered point sets. Inspired by spherical multi-view scanners, we propose a novel sampling model called Spotlights to represent a 3D shape as a compact 1D array of depth values. It simulates the configuration of cameras evenly distributed on a sphere, where each virtual camera casts light rays from its principal point through sample points on a small concentric spherical cap to probe for the possible intersections with the object surrounded by the sphere. The structured point cloud is hence given implicitly as a function of depths. We provide a detailed geometric analysis of this new sampling scheme and prove its effectiveness in the context of the point cloud completion task. Experimental results on both synthetic and real data demonstrate that our method achieves competitive accuracy and consistency while having a significantly reduced computational cost. Furthermore, we show superior performance on the downstream point cloud registration task over state-of-the-art completion methods.
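The sampling model described above can be sketched in a few lines of numpy. The following is a minimal illustration of the idea, not the paper's implementation: virtual cameras are placed approximately evenly on an enclosing sphere (here via a golden-angle spiral), each casts rays through sample points on a small spherical cap around its optical axis, and the probed object is stood in for by an analytic sphere so intersections have a closed form. All names (`spotlights_depths`, `cap_angle`, etc.) and parameter values are our own assumptions.

```python
import numpy as np

def fibonacci_sphere(n):
    """Approximately even viewpoints on the unit sphere (golden-angle spiral)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def ray_sphere_depth(origins, dirs, radius=0.5):
    """Depth of the first intersection of each unit-direction ray with a
    sphere of the given radius centred at the origin, or NaN on a miss."""
    b = np.einsum('ij,ij->i', origins, dirs)
    c = np.einsum('ij,ij->i', origins, origins) - radius ** 2
    disc = b * b - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where((disc >= 0.0) & (t > 0.0), t, np.nan)

def spotlights_depths(n_cams=32, n_rays=16, cap_angle=0.2, R=2.0,
                      radius=0.5, seed=0):
    """Encode the stand-in object as a compact 1D array of depths:
    each virtual camera on a sphere of radius R casts n_rays rays through
    sample points on a spherical cap of half-angle cap_angle."""
    rng = np.random.default_rng(seed)
    cams = R * fibonacci_sphere(n_cams)
    depths = []
    for cam in cams:
        z = -cam / np.linalg.norm(cam)              # optical axis: toward centre
        # orthonormal basis around the optical axis
        tmp = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        x = np.cross(z, tmp); x /= np.linalg.norm(x)
        y = np.cross(z, x)
        # ray directions scattered on the cap around z
        theta = cap_angle * np.sqrt(rng.uniform(size=n_rays))
        psi = rng.uniform(0.0, 2.0 * np.pi, size=n_rays)
        d = (np.outer(np.cos(theta), z)
             + np.outer(np.sin(theta) * np.cos(psi), x)
             + np.outer(np.sin(theta) * np.sin(psi), y))
        depths.append(ray_sphere_depth(np.tile(cam, (n_rays, 1)), radius=radius, dirs=d))
    return np.concatenate(depths)                   # ordered 1D shape code
```

Because every depth has a fixed slot (camera index × ray index), the resulting point cloud is ordered by construction; decoding a point is just `origin + depth * direction`. In a real completion pipeline the analytic sphere would be replaced by ray casting against a mesh or point cloud.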

* 17 pages 

Accurate calibration of multi-perspective cameras from a generalization of the hand-eye constraint

Feb 08, 2022
Yifu Wang, Wenqing Jiang, Kun Huang, Soren Schwertfeger, Laurent Kneip

Figures 1–4 for Accurate calibration of multi-perspective cameras from a generalization of the hand-eye constraint

Multi-perspective cameras are quickly gaining importance in many applications such as smart vehicles and virtual or augmented reality. However, a large system size or an absence of overlap between neighbouring fields of view often complicates their calibration. We present a novel solution that relies on the availability of an external motion capture system. Our core contribution is an extension of the hand-eye calibration problem which jointly solves multi-eye-to-base problems in closed form. We furthermore demonstrate its equivalence to the multi-eye-in-hand problem. The practical validity of our approach is supported by our experiments, indicating that the method is highly efficient and accurate, and outperforms existing closed-form alternatives.
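For context, the classical hand-eye constraint that the paper generalizes relates relative motions A of a sensor and B of the rig it is mounted on through the unknown extrinsic transform X via AX = XB. A standard textbook closed-form solver, not the paper's joint multi-eye-to-base method, recovers the rotation as the null eigenvector of a stacked quaternion system and the translation by linear least squares; the sketch below assumes noiseless inputs and names of our own choosing.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def quat_L(q):
    """Left-multiplication matrix: quat_L(q) @ p == q * p, q = [w, x, y, z]."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def quat_R(q):
    """Right-multiplication matrix: quat_R(q) @ p == p * q."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def rot_to_quat(R):
    """Quaternion [w, x, y, z] with a non-negative scalar part."""
    x, y, z, w = Rotation.from_matrix(R).as_quat()
    q = np.array([w, x, y, z])
    return -q if q[0] < 0 else q

def solve_hand_eye(motions_a, motions_b):
    """Closed-form AX = XB from relative motions given as (R, t) pairs.
    Rotation: qa * qx - qx * qb = 0 stacked over all pairs, solved as the
    eigenvector of the smallest eigenvalue. Translation: (Ra - I) tx = Rx tb - ta."""
    M = np.zeros((4, 4))
    for (Ra, _), (Rb, _) in zip(motions_a, motions_b):
        C = quat_L(rot_to_quat(Ra)) - quat_R(rot_to_quat(Rb))
        M += C.T @ C
    qx = np.linalg.eigh(M)[1][:, 0]          # eigenvector of smallest eigenvalue
    Rx = Rotation.from_quat(np.roll(qx, -1)).as_matrix()
    A = np.vstack([Ra - np.eye(3) for Ra, _ in motions_a])
    b = np.concatenate([Rx @ tb - ta
                        for (_, ta), (_, tb) in zip(motions_a, motions_b)])
    tx = np.linalg.lstsq(A, b, rcond=None)[0]
    return Rx, tx
```

The paper's setting replaces the relative-motion constraint with absolute eye-to-base constraints from the motion capture system and couples several such unknowns in one joint closed-form problem; the quaternion machinery above is the common building block.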

* accepted in the 2022 IEEE International Conference on Robotics and Automation (ICRA), Philadelphia (PA), USA 