Tixiao Shan

Virtual Maps for Autonomous Exploration of Cluttered Underwater Environments

Feb 16, 2022
Jinkun Wang, Fanfei Chen, Yewei Huang, John McConnell, Tixiao Shan, Brendan Englot

We consider the problem of autonomous mobile robot exploration in an unknown environment, taking into account a robot's coverage rate, map uncertainty, and state estimation uncertainty. This paper presents a novel exploration framework for underwater robots operating in cluttered environments, built upon simultaneous localization and mapping (SLAM) with imaging sonar. The proposed system comprises path generation, place recognition forecasting, belief propagation, and utility evaluation using a virtual map, which estimates the uncertainty associated with map cells throughout a robot's workspace. We evaluate the performance of this framework in simulated experiments, showing that our algorithm achieves a high coverage rate during exploration while maintaining low mapping and localization error. The real-world applicability of our framework is also demonstrated on an underwater remotely operated vehicle (ROV) exploring a harbor environment.

* Preprint; Accepted for publication in the IEEE Journal of Oceanic Engineering 
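
To make the virtual-map idea concrete, here is a minimal sketch, not the authors' code: a grid of per-cell uncertainty values scored against candidate paths, trading predicted information gain against propagated pose uncertainty. The class, weights, and cell model are illustrative assumptions.

```python
import numpy as np

class VirtualMap:
    """Illustrative 'virtual map': one scalar uncertainty per workspace cell,
    e.g. the trace of that cell's covariance (an assumed representation)."""

    def __init__(self, shape, cell_size=1.0):
        self.cell_size = cell_size
        self.uncertainty = np.full(shape, 1.0)  # unexplored cells start high

    def cells_on_path(self, path_xy):
        """Map metric waypoints to the grid cells a path would observe."""
        idx = np.floor(np.asarray(path_xy) / self.cell_size).astype(int)
        return np.unique(idx, axis=0)

def path_utility(vmap, path_xy, predicted_pose_cov, w_unc=0.5):
    """Score a candidate path: map-uncertainty reduction minus a penalty on
    the robot's propagated localization uncertainty (weights are placeholders)."""
    cells = vmap.cells_on_path(path_xy)
    info_gain = sum(vmap.uncertainty[tuple(c)] for c in cells)
    pose_risk = np.trace(predicted_pose_cov)  # from belief propagation
    return info_gain - w_unc * pose_risk

vmap = VirtualMap((50, 50))
candidate = [(0.5, 0.5), (1.5, 0.5), (2.5, 1.5)]
print(path_utility(vmap, candidate, predicted_pose_cov=np.eye(3) * 0.1))
```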

Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty

May 11, 2021
Fanfei Chen, Paul Szenher, Yewei Huang, Jinkun Wang, Tixiao Shan, Shi Bai, Brendan Englot

This paper studies the problem of autonomous exploration under localization uncertainty for a mobile robot with 3D range sensing. We present a framework for self-learning a high-performance exploration policy in a single simulation environment and transferring it to other environments, which may be physical or virtual. Recent work in transfer learning achieves encouraging performance by using domain adaptation and domain randomization to expose an agent to scenarios that fill the inherent gaps in sim2sim and sim2real approaches. However, training an agent under randomized conditions is an inefficient way for it to learn the salient features of its state; an agent can learn more efficiently from domain knowledge provided by human experts. We propose a novel approach that uses graph neural networks in conjunction with deep reinforcement learning, enabling decision-making over graphs containing relevant exploration information provided by human experts, to predict a robot's optimal sensing action in belief space. The policy, trained only in a single simulation environment, offers a real-time, scalable, and transferable decision-making strategy, achieving zero-shot transfer to other simulation environments and even real-world environments.
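
A minimal sketch of decision-making over an exploration graph, assuming PyTorch. This is not the paper's architecture: it is a single mean-aggregation message-passing layer with a per-node action head, and the feature choices are placeholders.

```python
import torch
import torch.nn as nn

class GraphPolicy(nn.Module):
    """Score candidate exploration nodes on a graph (illustrative GNN policy)."""

    def __init__(self, in_dim=4, hid_dim=32):
        super().__init__()
        self.encode = nn.Linear(in_dim, hid_dim)
        self.message = nn.Linear(hid_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)  # one logit/Q-value per node

    def forward(self, x, adj):
        # x: (N, in_dim) node features, e.g. frontier size, travel distance,
        #    expected uncertainty; adj: (N, N) adjacency matrix.
        h = torch.relu(self.encode(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(h + self.message(adj @ h / deg))  # mean over neighbors
        return self.score(h).squeeze(-1)  # (N,) scores over candidate actions

policy = GraphPolicy()
x = torch.randn(5, 4)                    # 5 graph nodes, 4 features each
adj = (torch.rand(5, 5) > 0.5).float()   # random adjacency for the demo
action = policy(x, adj).argmax().item()  # pick the best-scoring node
```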

LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping

Apr 22, 2021
Tixiao Shan, Brendan Englot, Carlo Ratti, Daniela Rus

We propose a framework for tightly-coupled lidar-visual-inertial odometry via smoothing and mapping, LVI-SAM, that achieves real-time state estimation and map-building with high accuracy and robustness. LVI-SAM is built atop a factor graph and is composed of two sub-systems: a visual-inertial system (VIS) and a lidar-inertial system (LIS). The two sub-systems are designed in a tightly-coupled manner, in which the VIS leverages LIS estimation to facilitate initialization. The accuracy of the VIS is improved by extracting depth information for visual features using lidar measurements. In turn, the LIS utilizes VIS estimation for initial guesses to support scan-matching. Loop closures are first identified by the VIS and further refined by the LIS. LVI-SAM can also function when one of the two sub-systems fails, which increases its robustness in both texture-less and feature-less environments. LVI-SAM is extensively evaluated on datasets gathered from several platforms over a variety of scales and environments. Our implementation is available at https://git.io/lvi-sam
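
A hedged sketch of the shared factor-graph backbone, using GTSAM's Python bindings: relative-pose factors from two odometry sources plus a loop closure, jointly optimized. The noise magnitudes and poses are illustrative, and this omits LVI-SAM's actual VIS/LIS pipelines.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
vis_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-1))  # looser
lis_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))  # tighter

# Anchor the first pose, then add one relative-pose factor per sub-system
# for each step (illustrative 1 m forward motion).
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, step, vis_noise))
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, step, lis_noise))

# A loop closure identified by one sub-system and refined by the other.
loop = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(-3.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(3, 0, loop, lis_noise))

initial = gtsam.Values()
for i in range(4):
    initial.insert(i, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.0, 0.0)))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(3).translation())
```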

Robust Place Recognition using an Imaging Lidar

Mar 03, 2021
Tixiao Shan, Brendan Englot, Fabio Duarte, Carlo Ratti, Daniela Rus

We propose a methodology for robust, real-time place recognition using an imaging lidar, which yields high-resolution, image-like 3D point clouds. Utilizing the intensity readings of an imaging lidar, we project the point cloud to obtain an intensity image. ORB feature descriptors are extracted from the image and encoded into a bag-of-words vector. The vector, used to identify the point cloud, is inserted into a database that is maintained by DBoW for fast place recognition queries. The returned candidate is further validated by matching visual feature descriptors. To reject outlier matches, we apply PnP with RANSAC, minimizing the reprojection error between visual features' positions in Euclidean space and their correspondences in 2D image space. Combining the advantages of both camera- and lidar-based place recognition approaches, our method is truly rotation-invariant and can handle reverse and upside-down revisits. The proposed method is evaluated on datasets gathered from a variety of platforms over different scales and environments. Our implementation is available at https://git.io/imaging-lidar-place-recognition

* ICRA 2021 
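
A minimal sketch of the geometric verification stage using OpenCV. The DBoW database query is omitted, and `query_img`, `cand_img`, `cand_pts3d` (one 3D point per candidate keypoint), and the intrinsics `K` are assumed inputs, not the paper's interfaces.

```python
import cv2
import numpy as np

def verify_revisit(query_img, cand_img, cand_pts3d, K, min_inliers=25):
    """Validate a place-recognition candidate: match ORB descriptors between
    the two intensity images, then check geometric consistency with
    PnP + RANSAC. Thresholds are illustrative."""
    orb = cv2.ORB_create(nfeatures=500)
    kq, dq = orb.detectAndCompute(query_img, None)
    kc, dc = orb.detectAndCompute(cand_img, None)
    if dq is None or dc is None:
        return False

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(dq, dc)
    if len(matches) < min_inliers:
        return False

    # 3D points from the candidate's cloud vs. 2D features in the query image.
    pts3d = np.float32([cand_pts3d[m.trainIdx] for m in matches])
    pts2d = np.float32([kq[m.queryIdx].pt for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, None, reprojectionError=3.0)
    return ok and inliers is not None and len(inliers) >= min_inliers
```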

Roboat II: A Novel Autonomous Surface Vessel for Urban Environments

Aug 24, 2020
Wei Wang, Tixiao Shan, Pietro Leoni, David Fernandez-Gutierrez, Drew Meyers, Carlo Ratti, Daniela Rus

This paper presents a novel autonomous surface vessel (ASV), called Roboat II, for urban transportation. Roboat II is capable of accurate simultaneous localization and mapping (SLAM), receding horizon tracking control and estimation, and path planning. It is designed to maximize internal space for transport and can carry payloads several times its own weight. Moreover, it is capable of holonomic motion, which facilitates transport, docking, and inter-connectivity between boats. The proposed SLAM system receives sensor data from a 3D lidar, an IMU, and a GPS, and uses a factor graph to tackle the multi-sensor fusion problem. To cope with the complex dynamics of motion in water, Roboat II employs an online nonlinear model predictive controller (NMPC), for which we experimentally identified the dynamical model of the vessel in order to achieve superior tracking performance. The states of Roboat II are simultaneously estimated using a nonlinear moving horizon estimation (NMHE) algorithm. Experiments demonstrate that Roboat II successfully performs online mapping and localization, plans its path, and robustly tracks the planned trajectory in a confined river, suggesting that this autonomous vessel holds promise for transporting humans and goods in many of today's waterways.

* Accepted to IROS 2020 
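
A minimal sketch of receding-horizon tracking control, not Roboat II's NMPC: the vessel's experimentally identified hydrodynamic model is replaced here by a toy planar kinematic model, and the horizon, weights, and bounds are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 0.2, 10  # time step [s] and horizon length (assumed values)

def step(x, u):
    """Toy surface-vessel kinematics: state [px, py, yaw],
    control [surge speed, yaw rate]. Holonomic sway is omitted for brevity."""
    px, py, yaw = x
    v, w = u
    return np.array([px + DT * v * np.cos(yaw),
                     py + DT * v * np.sin(yaw),
                     yaw + DT * w])

def mpc_control(x0, ref_traj):
    """Optimize a control sequence over the horizon, apply only the first
    control, and re-plan at the next step (the receding-horizon pattern)."""
    def cost(u_flat):
        u, x, c = u_flat.reshape(H, 2), x0, 0.0
        for k in range(H):
            x = step(x, u[k])
            c += np.sum((x[:2] - ref_traj[k]) ** 2) + 0.01 * np.sum(u[k] ** 2)
        return c
    res = minimize(cost, np.zeros(2 * H), method="L-BFGS-B",
                   bounds=[(-1.5, 1.5)] * (2 * H))
    return res.x[:2]

x = np.zeros(3)
ref = np.array([[0.2 * (k + 1), 0.0] for k in range(H)])  # straight line
print(mpc_control(x, ref))
```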

A Receding Horizon Multi-Objective Planner for Autonomous Surface Vehicles in Urban Waterways

Jul 16, 2020
Tixiao Shan, Wei Wang, Brendan Englot, Carlo Ratti, Daniela Rus

We propose a novel receding horizon planner for autonomous surface vehicle (ASV) path planning in urban waterways. The proposed planner is lightweight, as it requires no prior map and is suitable for deployment on platforms with limited computational resources. To find a feasible path in the presence of obstacles, the planner repeatedly generates a graph from a global reference path, taking the dynamic constraints of the robot into account. We also propose a novel method for multi-objective motion planning over the graph by leveraging the paradigm of lexicographic optimization, applying it for the first time to graph search within our receding horizon planner. The competing resources of interest are penalized hierarchically during the search. Higher-ranked resources cause a robot to incur non-negative costs over the paths traveled, and these costs are occasionally zero-valued; this is intended to capture problems in which a robot must manage resources such as collision risk, while leaving freedom for tie-breaking with respect to lower-priority resources. At the bottom of the hierarchy is a strictly positive quantity consumed by the robot, such as distance traveled, energy expended, or time elapsed. We conduct experiments in both simulated and real-world environments to validate the proposed planner and demonstrate its capability for enabling ASV navigation in complex environments.

* 59th IEEE Conference on Decision and Control 
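
The lexicographic idea can be sketched directly: represent each edge cost as a priority-ordered tuple and run Dijkstra's algorithm, letting Python's tuple comparison enforce the hierarchy. This remains a valid shortest-path search because the lowest-priority component is strictly positive, so tuple costs are non-decreasing along any path. The graph below is illustrative, not from the paper.

```python
import heapq

def lexicographic_dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, (risk, dist)), ...]} with edge costs as
    priority-ordered tuples, e.g. (collision_risk, distance)."""
    best = {start: (0.0, 0.0)}
    frontier = [((0.0, 0.0), start, [start])]
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if cost > best.get(node, cost):
            continue  # stale queue entry
        for nxt, (risk, dist) in graph[node]:
            new = (cost[0] + risk, cost[1] + dist)  # component-wise sums
            if new < best.get(nxt, (float("inf"), float("inf"))):
                best[nxt] = new
                heapq.heappush(frontier, (new, nxt, path + [nxt]))
    return None

graph = {
    "A": [("B", (0.0, 2.0)), ("C", (0.5, 1.0))],  # A->C is shorter but risky
    "B": [("D", (0.0, 2.0))],
    "C": [("D", (0.0, 1.0))],
    "D": [],
}
print(lexicographic_dijkstra(graph, "A", "D"))  # prefers the risk-free route
```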

LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping

Jul 14, 2020
Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, Daniela Rus

We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes and an efficient sliding-window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments.

* IROS 2020 
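
One ingredient, point-cloud de-skewing from interpolated IMU orientation, can be sketched as follows, assuming SciPy. Translation correction and full IMU pre-integration are omitted, and the timestamps and rotations are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew(points, t_points, imu_times, imu_quats):
    """Rotate each lidar point back into the frame at the sweep start.
    points: (N, 3); t_points: (N,) per-point timestamps within the sweep;
    imu_times/imu_quats: IMU-predicted orientation samples spanning the sweep."""
    slerp = Slerp(imu_times, Rotation.from_quat(imu_quats))
    r0 = slerp([imu_times[0]])[0]  # orientation at sweep start
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, t_points)):
        r_t = slerp([t])[0]
        # Express the point, measured in the frame at time t, in the
        # start-of-sweep frame: p_start = R0^-1 * Rt * p.
        out[i] = (r0.inv() * r_t).apply(p)
    return out

# Demo: a sweep rotating 10 degrees about z over 0.1 s.
imu_times = np.array([0.0, 0.1])
imu_quats = Rotation.from_euler("z", [0, 10], degrees=True).as_quat()
pts = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(deskew(pts, np.array([0.0, 0.1]), imu_times, imu_quats))
```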