The integration of sensor data is crucial in robotics to take full advantage of the various sensors employed. One critical aspect of this integration is determining the extrinsic calibration parameters, such as the relative transformation, between sensors. Data fusion between complementary sensors, such as radar and LiDAR, can provide significant benefits, particularly in harsh environments where accurate depth data is required. However, noise in radar sensor data makes the estimation of extrinsic calibration challenging. To address this issue, we present a novel framework for the extrinsic calibration of radar and LiDAR sensors, utilizing CycleGAN as a method of image-to-image translation. Our proposed method translates radar bird's-eye-view images into LiDAR-style images to estimate the 3-DOF extrinsic parameters. Image registration techniques, together with deskewing based on sensor odometry and B-spline interpolation, are employed to address the rolling shutter effect commonly present in spinning sensors. Our method demonstrates a notable improvement in extrinsic calibration compared to filter-based methods on the MulRan dataset.
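To make the deskewing step above concrete, the sketch below interpolates a continuous-time sensor pose from odometry samples and re-projects every point of one sweep into the scan-start frame. It uses SciPy's cubic and rotation splines in place of the paper's exact B-spline formulation, and all function names and signatures are illustrative assumptions rather than the authors' implementation.

```python
# Minimal deskewing sketch: interpolate per-point sensor poses over one sweep
# from odometry samples (illustrative; not the authors' implementation).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import RotationSpline

def deskew_points(points, point_times, odom_times, odom_positions, odom_rotations):
    """Re-project points captured during one sweep into the scan-start frame.

    points:         (N, 3) points in the sensor frame at their capture time
    point_times:    (N,)   per-point timestamps
    odom_times:     (M,)   odometry timestamps bracketing the sweep
    odom_positions: (M, 3) odometry translations
    odom_rotations: scipy.spatial.transform.Rotation holding M rotations
    """
    # Smooth continuous-time pose: cubic spline for translation,
    # rotation spline for orientation.
    pos_spline = CubicSpline(odom_times, odom_positions)
    rot_spline = RotationSpline(odom_times, odom_rotations)

    t0 = point_times.min()
    rot0, pos0 = rot_spline(t0), pos_spline(t0)   # pose of the scan-start frame

    deskewed = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, point_times)):
        # Point in the world frame at its capture time, then back into the start frame.
        p_world = rot_spline(t).apply(p) + pos_spline(t)
        deskewed[i] = rot0.inv().apply(p_world - pos0)
    return deskewed
```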
Maritime radars are widely adopted to capture a vessel's omnidirectional surroundings as imagery. Nevertheless, inherent challenges persist with marine radars, including limited frequency, suboptimal resolution, and indeterminate detections. Additionally, the scarcity of discernible landmarks in vast marine expanses remains a challenge, resulting in consecutive scenes that often lack matching feature points. In this context, we introduce LodeStar, a resilient maritime radar scan representation, together with an enhanced feature extraction technique tailored for marine radar applications. Moreover, we estimate marine radar odometry using a semi-direct approach. The LodeStar-based approach markedly reduces odometry estimation errors, and this claim is corroborated through thorough experimental validation.
Odometry is crucial for robot navigation, particularly in situations where global positioning methods such as the global positioning system (GPS) are unavailable. The main goal of odometry is to predict the robot's motion and accurately determine its current location. Various sensors, such as wheel encoders, inertial measurement units (IMUs), cameras, radars, and Light Detection and Ranging (LiDAR), are used for odometry in robotics. LiDAR, in particular, has gained attention for its ability to provide rich three-dimensional (3D) data and its immunity to illumination variations. This survey aims to examine advancements in LiDAR odometry thoroughly. We start by exploring LiDAR technology and then scrutinize LiDAR odometry works, categorizing them based on their sensor integration approaches. These approaches include methods relying solely on LiDAR, those combining LiDAR with an IMU, strategies involving multiple LiDARs, and methods fusing LiDAR with other sensor modalities. We also analyze public datasets and evaluation methods for LiDAR odometry. In conclusion, we address existing challenges and outline potential future directions in LiDAR odometry. To our knowledge, this survey is the first comprehensive exploration of LiDAR odometry.
Place recognition is crucial for robot localization and loop closure in simultaneous localization and mapping (SLAM). Recently, LiDARs have gained popularity due to their robust sensing capability and measurement consistency, even in illumination-variant environments, offering an advantage over traditional imaging sensors. Among the many LiDAR types, spinning LiDARs are widely adopted, while LiDARs with non-repetitive scanning patterns have recently been utilized in robotic applications. Beyond range measurements, some LiDARs offer additional measurements, such as reflectivity, Near Infrared (NIR), and velocity (e.g., FMCW LiDARs). Despite these advancements, there is a noticeable dearth of datasets that comprehensively reflect the broad spectrum of LiDAR configurations optimized for place recognition. To tackle this issue, our paper proposes the HeLiPR dataset, curated especially for place recognition with heterogeneous LiDAR systems and embodying spatial-temporal variations. To the best of our knowledge, the HeLiPR dataset is the first heterogeneous LiDAR dataset designed to support inter-LiDAR place recognition with both non-repetitive and spinning LiDARs, accommodating different fields of view (FOV) and varying numbers of rays. Encompassing these distinct LiDAR configurations, it captures varied environments ranging from urban cityscapes to high-dynamic freeways over a month, designed to enhance the adaptability and robustness of place recognition across diverse scenarios. Notably, the HeLiPR dataset also includes trajectories that parallel sequences from MulRan, underscoring its utility for research in heterogeneous LiDAR place recognition and long-term studies. The dataset is accessible at https://sites.google.com/view/heliprdataset.
Transparent objects are encountered frequently in our daily lives, yet recognizing them poses challenges for conventional vision sensors because their unique material properties are not well perceived by RGB or depth cameras. Overcoming this limitation, thermal infrared cameras have emerged as a solution, offering improved visibility and shape information for transparent objects. In this paper, we present TRansPose, the first large-scale multispectral dataset that combines stereo RGB-D, thermal infrared (TIR) images, and object poses to promote transparent object research. The dataset includes 99 transparent objects, encompassing 43 household items, 27 recyclable trash items, and 29 pieces of chemical laboratory equipment, as well as 12 non-transparent objects. It comprises a vast collection of 333,819 images and 4,000,056 annotations, providing instance-level segmentation masks, ground-truth poses, and completed depth information. The data was acquired using a FLIR A65 thermal infrared (TIR) camera, two Intel RealSense L515 RGB-D cameras, and a Franka Emika Panda robot manipulator. Spanning 87 sequences, TRansPose covers various challenging real-life scenarios, including objects filled with water, diverse lighting conditions, heavy clutter, non-transparent or translucent containers, objects in plastic bags, and multi-stacked objects. The TRansPose dataset can be accessed from the following link: https://sites.google.com/view/transpose-dataset
Owing to its robustness in sensing, radar has been highlighted for overcoming harsh weather conditions such as fog and heavy snow. In this paper, we present a novel radar-only place recognition method that measures a similarity score by utilizing Radon-transformed sinogram images and cross-correlation in the frequency domain. Doing so achieves rigid transform invariance during place recognition while suppressing the effects of radar multipath and ring noise. In addition, we compute the radar similarity distance using a mutable threshold to mitigate the variability of the similarity score, and reduce the time complexity of processing copious radar data with hierarchical retrieval. We demonstrate the matching performance for both intra-session loop-closure detection and global place recognition using publicly available imaging radar datasets. We verify reliable performance compared to an existing stable radar place recognition method. Furthermore, the code for the proposed imaging radar place recognition is released for the community.
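The sinogram-plus-correlation idea can be illustrated with a short sketch: a radar image is Radon-transformed, collapsed into a 1-D angular descriptor, and two descriptors are compared by circular cross-correlation computed in the frequency domain, which makes the score invariant to relative rotation. This is an assumed minimal example, not the released code; the descriptor, normalization, and thresholding of the actual method may differ.

```python
# Sketch of rotation-invariant radar scan matching with Radon sinograms and
# frequency-domain cross-correlation (illustrative; not the released code).
import numpy as np
from skimage.transform import radon

def sinogram_descriptor(radar_image, n_angles=360):
    """Radon-transform a radar image and collapse the range axis into a 1-D
    angular descriptor. A rotation of the scene becomes a circular shift of
    this descriptor."""
    theta = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    sinogram = radon(radar_image, theta=theta, circle=True)
    return sinogram.sum(axis=0)  # shape (n_angles,)

def similarity(desc_a, desc_b):
    """Maximum normalized circular cross-correlation, computed in the
    frequency domain, as a rotation-invariant similarity score."""
    fa, fb = np.fft.rfft(desc_a), np.fft.rfft(desc_b)
    corr = np.fft.irfft(fa * np.conj(fb), n=desc_a.size)
    corr /= (np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-12)
    best_shift = int(np.argmax(corr))  # recovered relative rotation, in angle bins
    return corr[best_shift], best_shift
```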
In recent years, multiple Light Detection and Ranging (LiDAR) systems have grown in popularity due to the enhanced accuracy and stability afforded by their increased field of view (FOV). However, integrating multiple LiDARs can be challenging due to temporal and spatial discrepancies. Common practice is to transform points among sensors, which requires strict time synchronization or approximates the transformation among sensor frames. Unlike existing methods, we formulate the inter-sensor transformation using continuous-time (CT) inertial measurement unit (IMU) modeling and derive the associated ambiguity as a point-wise uncertainty. This uncertainty, modeled by combining the state covariance with the acquisition time and point range, allows us to relax strict time synchronization and to overcome FOV differences. The proposed method has been validated on both public and our own datasets and is compatible with various LiDAR manufacturers and scanning patterns. We open-source the code for public access at https://github.com/minwoo0611/MA-LIO.
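One plausible way to realize such a point-wise uncertainty is sketched below: a scalar variance that grows with the pose covariance, with the offset between a point's acquisition time and the state estimate, and with the point's range, which can then be used to down-weight residuals. The functional form and gains are assumptions for illustration, not the paper's exact model.

```python
# Illustrative point-wise uncertainty for multi-LiDAR fusion: one assumed way
# to combine state covariance, acquisition-time offset, and point range into a
# scalar weight (a sketch, not the paper's exact formulation).
import numpy as np

def point_uncertainty(state_cov, dt, point_range, k_time=1.0, k_range=0.01):
    """Per-point variance that grows with pose uncertainty, with the time
    elapsed since the state estimate, and with range.

    state_cov:   (6, 6) pose covariance from the continuous-time IMU state
    dt:          offset between the point stamp and the state estimate (s)
    point_range: distance from the sensor origin to the point (m)
    k_time, k_range: tuning gains (assumed values, not from the paper)
    """
    pose_var = np.trace(state_cov)  # aggregate pose uncertainty
    return pose_var * (1.0 + k_time * abs(dt)) * (1.0 + k_range * point_range)

def point_weight(state_cov, dt, point_range):
    """Down-weight residuals of uncertain points in the estimator."""
    return 1.0 / point_uncertainty(state_cov, dt, point_range)
```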