Abstract: Autonomous robot navigation can be particularly demanding, especially when the surrounding environment is unknown and the safety of the robot is crucial. This work concerns the synthesis of Control Barrier Functions (CBFs) from data for safe navigation in unknown environments. A novel methodology, inspired by the State-Dependent Riccati Equation (SDRE), is proposed to jointly learn CBFs and corresponding safe controllers in simulation. The CBF is used to obtain admissible commands from any nominal, possibly unsafe, controller. An approach to apply the CBF inside a safety filter without the need for a consistent map or position estimate is developed. Subsequently, the resulting reactive safety filter is deployed on a multirotor platform integrating a LiDAR sensor, both in simulation and in real-world experiments.
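As context for how such a safety filter acts on a nominal command, the following is a minimal sketch of the generic CBF quadratic-program filter, not the paper's learned CBF: with control-affine dynamics x_dot = f(x) + g(x)u and safe set h(x) >= 0, the single affine constraint admits a closed-form projection. The toy system and the constant alpha below are illustrative assumptions.

```python
# Minimal sketch of a generic CBF-based safety filter (illustrative, not the
# paper's learned CBF). h(x) >= 0 defines the safe set; dynamics are
# control-affine: x_dot = f(x) + g(x) u.
import numpy as np

def cbf_filter(u_nom, x, h, grad_h, f, g, alpha=1.0):
    """Project a nominal command onto the CBF-admissible input set.

    Solves min ||u - u_nom||^2 s.t. Lf_h + Lg_h @ u + alpha * h(x) >= 0,
    which for a single affine constraint has a closed-form projection.
    """
    dh = grad_h(x)
    a = dh @ g(x)                  # Lg h(x), shape (m,)
    b = dh @ f(x) + alpha * h(x)   # Lf h(x) + alpha * h(x)
    slack = a @ u_nom + b
    if slack >= 0.0:               # nominal command is already safe
        return u_nom
    # otherwise shift u_nom minimally to activate the constraint
    return u_nom - a * slack / (a @ a + 1e-12)

# Toy example: single integrator x_dot = u, safe set ||x|| <= 1 via
# h(x) = 1 - ||x||^2, nominal command pushing toward the boundary.
h = lambda x: 1.0 - x @ x
grad_h = lambda x: -2.0 * x
f = lambda x: np.zeros(2)
g = lambda x: np.eye(2)
x = np.array([0.9, 0.0])
print(cbf_filter(np.array([1.0, 0.0]), x, h, grad_h, f, g))
# the command along the unsafe direction is attenuated near the boundary
```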
Abstract: This paper presents field results and lessons learned from the deployment of aerial robots inside ship ballast tanks. Vessel tanks, including ballast tanks and cargo holds, are dark, dusty environments that combine very narrow openings with wide open spaces, creating several challenges for autonomous navigation and inspection operations. We present a system for vessel tank inspection using an aerial robot along with its autonomy modules. We show the results of autonomous exploration and visual inspection in 3 ships, spanning 7 distinct types of ballast tank sections. Additionally, we comment on the lessons learned from the field and possible directions for future work. Finally, we release a dataset consisting of the data from these missions, along with data collected with a handheld sensor stick.
Abstract: Enabling autonomous robots to operate robustly in challenging environments is necessary in a future with increased autonomy. For many autonomous systems, estimation and odometry remain a single point of failure, from which it can often be difficult, if not impossible, to recover. As such, robust odometry solutions are of key importance. In this work, a method for tightly-coupled LiDAR-Radar-Inertial fusion for odometry is proposed, enabling mitigation of the effects of LiDAR degeneracy by leveraging a complementary perception modality while preserving the accuracy of LiDAR in well-conditioned environments. The proposed approach combines modalities in a factor graph-based windowed smoother with sensor-specific factor formulations that, in the case of degeneracy, allow partial information to be conveyed to the graph along the non-degenerate axes. The proposed method is evaluated in real-world tests on a flying robot experiencing degraded conditions, including geometric self-similarity as well as obscurant occlusion. For the benefit of the community, we release the datasets presented: https://github.com/ntnu-arl/lidar_degeneracy_datasets.
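One common way to convey only the non-degenerate part of a registration factor, consistent with (but not necessarily identical to) what the abstract describes, is to project the factor's information matrix onto its well-conditioned eigen-directions. The sketch below is a hedged illustration; the threshold and the toy Hessian are assumptions, not the paper's values.

```python
# Hedged sketch: keep only well-observed directions of a LiDAR registration
# factor by zeroing the information along weak eigen-directions.
import numpy as np

def project_information(hessian, eig_threshold=1e2):
    """Return an information matrix with degenerate directions removed.

    hessian: 6x6 Gauss-Newton approximation J^T J of the registration cost.
    Eigen-directions with eigenvalues below eig_threshold are treated as
    unobservable and contribute zero information to the factor.
    """
    w, v = np.linalg.eigh(hessian)
    w_kept = np.where(w >= eig_threshold, w, 0.0)  # drop weak directions
    return (v * w_kept) @ v.T                      # V diag(w_kept) V^T

# Example: a corridor-like cost that is weak along one translation axis.
H = np.diag([1e3, 1e3, 1e3, 1e3, 1e3, 1e-1])       # last axis ~unobservable
print(np.diag(project_information(H)))
# information is retained only on the well-conditioned axes
```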
Abstract: This paper introduces a Nonlinear Model Predictive Control (N-MPC) framework exploiting a Deep Neural Network for processing onboard-captured depth images for collision avoidance in trajectory-tracking tasks with UAVs. The network is trained on simulated depth images to output a collision score for queried 3D points within the sensor field of view. This network is then translated into an algebraic symbolic equation and included in the N-MPC, explicitly constraining predicted positions to be collision-free throughout the receding horizon. The N-MPC achieves real-time control of a UAV at a control frequency of 100 Hz. The proposed framework is validated through statistical analysis of the collision classifier network, as well as Gazebo simulations and real experiments, to assess the resulting capabilities of the N-MPC to effectively avoid collisions in cluttered environments. The associated code is released open-source along with the training images.
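To illustrate the core idea of turning a trained network into an algebraic constraint, here is a hedged sketch using CasADi as the symbolic backend; the abstract does not name the tool, and the tiny network, random placeholder weights, and horizon length below are all assumptions.

```python
# Illustrative sketch: embed an MLP collision-score network as a symbolic
# expression and use it as a hard constraint on predicted positions.
import casadi as ca
import numpy as np

# Placeholder weights for a tiny 3 -> 16 -> 1 collision-score network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def collision_score(p):
    """Symbolic forward pass mapping a 3D point to a scalar collision score."""
    h = ca.tanh(ca.mtimes(W1, p) + b1)
    return ca.mtimes(W2, h) + b2

# The symbolic network can be evaluated numerically like any CasADi function.
p = ca.MX.sym("p", 3)
score_fn = ca.Function("score", [p], [collision_score(p)])
print(float(score_fn([0.2, 0.0, 1.0])))

# Inside an N-MPC, the same expression would constrain every predicted
# position over the receding horizon (objective and solver setup omitted):
opti = ca.Opti()
P = opti.variable(3, 5)                     # predicted positions, 5 steps
for k in range(5):
    opti.subject_to(collision_score(P[:, k]) <= 0.0)  # classified safe
# opti.minimize(...) and opti.solver("ipopt") would complete the NLP.
```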
Abstract: This work contributes a novel deep navigation policy that enables collision-free flight of aerial robots based on a modular approach exploiting deep collision encoding and reinforcement learning. The proposed solution builds upon a deep collision encoder that is trained on both simulated and real depth images using supervised learning, such that it compresses the high-dimensional depth data to a low-dimensional latent space encoding collision information while accounting for the robot's size. This compressed encoding is combined with an estimate of the robot's odometry and the desired target location to train a deep reinforcement learning navigation policy that offers low-latency computation and robust sim2real performance. A set of simulation and experimental studies in diverse environments is conducted, demonstrating the efficiency of the emerged behavior and its resilience in real-life deployments.
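An architecture-level sketch of this modular structure follows: a convolutional encoder compresses a depth frame into a collision latent, which is concatenated with odometry and the goal to feed a small policy network. All layer sizes and the action dimension are illustrative guesses, not the paper's architecture.

```python
# Hedged sketch of the encoder + policy modular structure (sizes assumed).
import torch
import torch.nn as nn

class CollisionEncoder(nn.Module):
    """Compresses a depth image into a low-dimensional collision latent."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, depth):          # depth: (B, 1, H, W)
        return self.net(depth)

class NavigationPolicy(nn.Module):
    """Maps collision latent + odometry + goal to a normalized action."""
    def __init__(self, latent_dim=32, odom_dim=6, goal_dim=3, act_dim=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + odom_dim + goal_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, latent, odom, goal):
        return self.mlp(torch.cat([latent, odom, goal], dim=-1))

encoder, policy = CollisionEncoder(), NavigationPolicy()
depth = torch.rand(1, 1, 64, 96)       # stand-in for a depth frame
action = policy(encoder(depth), torch.zeros(1, 6),
                torch.tensor([[2.0, 0.0, 1.0]]))
print(action.shape)                    # torch.Size([1, 4])
```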
Abstract: Reliable offroad autonomy requires low-latency, high-accuracy state estimates of pose as well as velocity that remain viable throughout environments with sub-optimal operating conditions for the utilized perception modalities. As state estimation remains a single point of failure in the majority of aspiring autonomous systems, failing to address the environmental degradation that perception sensors may experience under such operating conditions can be a mission-critical shortcoming. In this work, a method for integrating radar velocity information into a LiDAR-inertial odometry solution is proposed, enabling consistent estimation performance even with degraded LiDAR-inertial odometry. The proposed method utilizes the direct velocity-measuring capabilities of a Frequency Modulated Continuous Wave (FMCW) radar sensor to enhance the LiDAR-inertial smoother onboard the vehicle by integrating the forward velocity measurement into the graph-based smoother. This leads to increased robustness of the overall estimation solution, even in the absence of LiDAR data. The method was validated in hardware experiments conducted onboard an all-terrain vehicle traveling at high speed, ~12 m/s, in demanding offroad environments.
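The essence of such a forward-velocity factor can be sketched as a simple residual: the state's world-frame velocity is rotated into the body frame and its forward component is compared against the radar measurement. Frames, the noise value, and the smoother interface below are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a forward-velocity residual for a graph-based smoother.
import numpy as np

def radar_velocity_residual(R_wb, v_world, v_radar_fwd, sigma=0.1):
    """Whitened scalar residual comparing state velocity to radar speed.

    R_wb: 3x3 rotation from body to world; v_world: velocity in world frame;
    v_radar_fwd: forward speed measured by the FMCW radar along body x.
    """
    v_body = R_wb.T @ v_world          # express state velocity in body frame
    return (v_body[0] - v_radar_fwd) / sigma

# Example: vehicle heading 30 deg off world x, moving at 12 m/s along heading.
yaw = np.deg2rad(30.0)
R_wb = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0, 0.0, 1.0]])
v_world = R_wb @ np.array([12.0, 0.0, 0.0])
print(radar_velocity_residual(R_wb, v_world, 12.0))  # ~0 for consistent state
```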
Abstract: Aerial field robotics research represents the domain of study that aims to equip unmanned aerial vehicles, and as it pertains to this chapter specifically Micro Aerial Vehicles (MAVs), with the ability to operate in real-life environments that present challenges to safe navigation. We present the key elements of autonomy for MAVs that are resilient to collisions and sensing degradation while operating under constrained computational resources. We overview aspects of the state of the art, outline bottlenecks to resilient navigation autonomy, and assess the field-readiness of MAVs. We conclude with notable contributions and discuss considerations for future research that are essential for resilience in aerial robotics.
Abstract: The typical point cloud sampling methods used in state estimation for mobile robots preserve a high level of point redundancy. This redundancy slows down the estimation pipeline and can make real-time estimation drift in geometrically symmetrical and structureless environments. We propose a novel point cloud sampling method capable of lowering the effects of geometric degeneracies by minimizing redundancy within the cloud. The proposed method is an alternative to the commonly used sparsification methods that normalize the density of points to comply with the constraints on the real-time capabilities of a robot. In contrast to density normalization, our method builds on the fact that linear and planar surfaces contain a high level of redundancy that propagates into iterative estimation pipelines. We define the concept of gradient flow, quantifying the surface underlying a point, and show that maximizing the entropy of the gradient flow minimizes point redundancy for robot ego-motion estimation. We integrate the proposed method into the point-based KISS-ICP and feature-based LOAM odometry pipelines and evaluate it experimentally on KITTI, Hilti-Oxford, and custom datasets from multirotor UAVs. The experiments show that the proposed sampling technique outperforms state-of-the-art methods in well-conditioned as well as geometrically degenerate settings, in both accuracy and speed.
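The abstract's gradient-flow quantity is not reproduced here, but the entropy-maximization step can be illustrated with a generic per-point surface descriptor standing in for it: bin points by the descriptor and subsample so the bin histogram becomes as uniform (maximum-entropy) as possible. The descriptor choice, bin count, and budget below are all assumptions.

```python
# Hedged sketch: subsample a cloud so a per-point descriptor histogram is as
# flat as possible (round-robin fill across bins approximates max entropy).
import numpy as np

def max_entropy_sample(points, descriptors, budget, n_bins=16):
    """Pick `budget` points while flattening the descriptor histogram."""
    edges = np.linspace(descriptors.min(), descriptors.max(), n_bins + 1)
    bins = np.clip(np.digitize(descriptors, edges) - 1, 0, n_bins - 1)
    rng = np.random.default_rng(0)
    per_bin = [rng.permutation(np.flatnonzero(bins == b))
               for b in range(n_bins)]
    selected, level = [], 0
    while len(selected) < budget:
        progressed = False
        for idx in per_bin:            # take one point per bin per pass
            if level < len(idx) and len(selected) < budget:
                selected.append(idx[level])
                progressed = True
        if not progressed:             # every bin exhausted
            break
        level += 1
    return points[np.asarray(selected)]

pts = np.random.default_rng(1).normal(size=(1000, 3))
desc = np.arctan2(pts[:, 1], pts[:, 0])   # stand-in per-point descriptor
print(max_entropy_sample(pts, desc, budget=200).shape)  # (200, 3)
```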
Abstract: This paper introduces the Terrain Recognition And Contact Force Estimation Paw, a compact and sensorized shoe designed for legged robots. The paw end-effector is made of silicone that deforms upon the application of contact forces, while an embedded micro camera is utilized to capture images of the deformed inner surface inside the shoe, and a microphone picks up audio signals. Processed through machine learning techniques, the images are mapped to compute an accurate estimate of the cumulative 3D force vector, while the audio signals are analyzed to identify the terrain class (e.g., gravel, snow). By leveraging its on-edge computation ability, the paw enhances the capabilities of legged robots by providing key information in real-time that can be used to adapt locomotion control strategies. To assess the performance of this novel sensorized paw, we conducted experiments on data collected through a specially designed testbed for force estimation, as well as on recordings of the audio signatures of different terrains interacting with the paw. The results demonstrate the accuracy and effectiveness of the system, highlighting its potential for improving legged robot performance.
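As a stand-in for the image-to-force mapping, the sketch below shows one plausible learned regressor: a small CNN mapping the camera image of the deformed inner surface to a 3D force vector. The architecture, input resolution, and units are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of an image -> 3D contact force regressor (sizes assumed).
import torch
import torch.nn as nn

class ForceRegressor(nn.Module):
    """Maps an image of the deformed paw surface to (Fx, Fy, Fz)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)   # cumulative force vector in N

    def forward(self, img):
        return self.head(self.features(img))

model = ForceRegressor()
print(model(torch.rand(1, 3, 96, 96)).shape)  # torch.Size([1, 3])
# Training would regress against ground-truth forces from the testbed.
```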
Abstract: The potential of Martian lava tubes for resource extraction and habitat sheltering highlights the need for robots capable of undertaking the grueling task of their exploration. Driven by this motivation, in this work we introduce a legged robot system optimized for jumping in the low gravity of Mars, designed with leg configurations adaptable to both bipedal and quadrupedal systems. This design utilizes torque-controlled actuators coupled with springs for high-power jumping, robust locomotion, and an energy-efficient resting pose. Key design features include a 5-bar mechanism as the leg concept, combined with springs connected by a high-strength cord. The selected 5-bar link lengths and spring stiffness were optimized to maximize jump height in Martian gravity and realized as a robot leg. Two such legs combined with a compact body allowed jump testing of a bipedal prototype. The robot is 0.472 m tall and weighs 7.9 kg. Jump testing with significant safety margins resulted in a measured jump height of 1.141 m in Earth's gravity, and a total of 4 jumping experiments are presented. Simulations utilizing the full motor torque and kinematic limits of the design resulted in a maximum possible jump height of 1.52 m in Earth's gravity and 3.63 m in Mars' gravity, highlighting the versatility of jumping as a form of locomotion and a means of overcoming obstacles in lower gravity.
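A back-of-envelope ballistic calculation makes the gravity scaling intuitive: for a given takeoff speed v, the apex height is h = v^2 / (2g). This simple model ignores the actuator and kinematic limits of the paper's full simulation, so it only approximates the reported figures; the interpretation in the last comment is a hedged guess.

```python
# Back-of-envelope: ballistic apex height h = v^2 / (2 g) for a given
# takeoff speed, compared across Earth and Mars gravity.
G_EARTH, G_MARS = 9.81, 3.71  # m/s^2

def apex_height(v_takeoff, g):
    return v_takeoff ** 2 / (2.0 * g)

v = (2.0 * G_EARTH * 1.52) ** 0.5          # speed implied by the Earth jump
print(f"takeoff speed: {v:.2f} m/s")       # ~5.46 m/s
print(f"Mars apex at same speed: {apex_height(v, G_MARS):.2f} m")  # ~4.0 m
# The paper's simulated 3.63 m is lower than this naive scaling, plausibly
# because gravity also acts during the push-off phase and the design's
# torque/kinematic limits cap the achievable takeoff speed.
```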