Brendan Englot

Stevens Institute of Technology

Mobile Manipulation Platform for Autonomous Indoor Inspections in Low-Clearance Areas

Sep 19, 2023
Erik Pearson, Paul Szenher, Christine Huang, Brendan Englot

Mobile manipulators have been used for inspection, maintenance and repair tasks over the years, but there are some key limitations. Stability concerns typically require mobile platforms to be large in order to handle far-reaching manipulators, or for the manipulators to have drastically reduced workspaces to fit onto smaller mobile platforms. Therefore, we propose a combination of two widely used robots: the Clearpath Jackal unmanned ground vehicle and the Kinova Gen3 six-degree-of-freedom manipulator. The Jackal has a small footprint and works well in low-clearance indoor environments. Extensive testing of localization, navigation and mapping using LiDAR sensors makes the Jackal a well-developed mobile platform suitable for mobile manipulation. The Gen3 has a long reach with reasonable power consumption for manipulation tasks. A wrist camera for RGB-D sensing and a customizable end effector interface make the Gen3 suitable for a myriad of manipulation tasks. Typically, these features would result in an unstable platform; however, with a few minor hardware and software modifications, we have produced a stable, high-performance mobile manipulation platform with significant mobility, reach, sensing, and maneuverability for indoor inspection tasks, without degradation of the component robots' individual capabilities. These assertions were validated on hardware via semi-autonomous navigation to waypoints in a busy indoor environment, and high-precision self-alignment alongside planar structures for intervention tasks.

* 5 pages, 7 figures, to be published in IDETC-CIE 2023 

A Robust and Rapidly Deployable Waypoint Navigation Architecture for Long-Duration Operations in GPS-Denied Environments

Aug 10, 2023
Erik Pearson, Brendan Englot

For long-duration operations in GPS-denied environments, accurate and repeatable waypoint navigation is an essential capability. While simultaneous localization and mapping (SLAM) works well for single-session operations, repeated, multi-session operations require robots to navigate to the same spot(s) accurately and precisely every time. Localization and navigation errors can build up from one session to the next if they are not accounted for. Localization against a global reference map works well, but there are no publicly available packages for quickly building such maps and navigating with them. We propose a new architecture that combines two publicly available packages with a newly released package to create a fully functional multi-session navigation system for ground vehicles. The system can go from the beginning of the first manual scan to autonomous waypoint navigation in just a few hours.

* 8 pages, 7 figures, Ubiquitous Robots 2023 

Robust Unmanned Surface Vehicle Navigation with Distributional Reinforcement Learning

Jul 30, 2023
Xi Lin, John McConnell, Brendan Englot

Autonomous navigation of Unmanned Surface Vehicles (USVs) in marine environments with current flows is challenging, and few prior works have addressed the sensor-based navigation problem in such environments with no prior knowledge of the current flow and obstacles. We propose a Distributional Reinforcement Learning (RL) based local path planner that learns return distributions capturing the uncertainty of action outcomes, and an adaptive algorithm that automatically tunes the level of sensitivity to risk in the environment. The proposed planner achieves more stable learning performance and converges to safer policies than a traditional RL based planner. Computational experiments demonstrate that, compared to a traditional RL based planner and classical local planning methods such as Artificial Potential Fields and the Bug Algorithm, the proposed planner is robust against environmental flows and is able to plan trajectories that are superior in safety, time and energy consumption.

* The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) 
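The risk-sensitivity mechanism described in the abstract can be illustrated with a minimal sketch (not the paper's implementation): given learned quantile estimates of each action's return distribution, a planner can score actions by Conditional Value at Risk (CVaR), where a risk level `alpha` controls how much weight the worst outcomes receive. All names and values here are hypothetical.

```python
import numpy as np

def cvar(quantiles, alpha):
    """Conditional Value at Risk: mean of the worst alpha-fraction of quantiles."""
    q = np.sort(np.asarray(quantiles, dtype=float))
    k = max(1, int(np.ceil(alpha * len(q))))
    return q[:k].mean()

def select_action(quantiles_per_action, alpha):
    """Pick the action whose return distribution has the best CVaR.
    alpha=1.0 recovers the risk-neutral mean; small alpha is risk-averse."""
    scores = [cvar(q, alpha) for q in quantiles_per_action]
    return int(np.argmax(scores))

# Action 0 has a higher mean return but a worse worst case than action 1,
# so the chosen action flips as alpha moves from risk-neutral to risk-averse.
risk_neutral_choice = select_action([[0, 10], [4, 5]], alpha=1.0)  # action 0
risk_averse_choice = select_action([[0, 10], [4, 5]], alpha=0.5)   # action 1
```

An adaptive scheme like the paper's could then raise or lower `alpha` online based on observed environmental risk, rather than fixing it by hand.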

Robust Route Planning with Distributional Reinforcement Learning in a Stochastic Road Network Environment

Apr 19, 2023
Xi Lin, Paul Szenher, John D. Martin, Brendan Englot

Route planning is essential to mobile robot navigation problems. In recent years, deep reinforcement learning (DRL) has been applied to learning optimal planning policies in stochastic environments without prior knowledge. However, existing works focus on learning policies that maximize the expected return, the performance of which can vary greatly when the level of stochasticity in the environment is high. In this work, we propose a distributional reinforcement learning based framework that learns return distributions which explicitly reflect environmental stochasticity. Policies based on the second-order stochastic dominance (SSD) relation can be used to make adjustable route decisions according to user preference on performance robustness. Our proposed method is evaluated in a simulated road network environment, and experimental results show that our method is able to plan the shortest routes that minimize stochasticity in travel time when robustness is preferred, while other state-of-the-art DRL methods are agnostic to environmental stochasticity.

* The 20th International Conference on Ubiquitous Robots (UR 2023) 
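The second-order stochastic dominance (SSD) relation used for route decisions can be sketched as follows. For two equal-size, equally weighted empirical return samples, A dominates B in the SSD sense exactly when every partial sum of A's sorted outcomes (worst first) is at least B's; this is a standard characterization, and the function name is illustrative rather than the paper's.

```python
import numpy as np

def ssd_dominates(a, b):
    """True if empirical return sample `a` second-order stochastically
    dominates `b` (higher returns preferred). For equal-size, equally
    weighted samples this reduces to comparing cumulative sums of the
    sorted samples, worst outcomes first."""
    a, b = np.sort(a), np.sort(b)
    if len(a) != len(b):
        raise ValueError("samples must be the same size")
    return bool(np.all(np.cumsum(a) >= np.cumsum(b) - 1e-12))
```

A route whose (negated) travel-time samples are less dispersed at the same mean SSD-dominates a riskier alternative, which is how a user preference for robustness can be honored when ranking routes.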

Monocular Simultaneous Localization and Mapping using Ground Textures

Mar 10, 2023
Kyle M. Hart, Brendan Englot, Ryan P. O'Shea, John D. Kelly, David Martinez

Recent work has shown impressive localization performance using only images of ground textures taken with a downward-facing monocular camera. This provides a reliable navigation method that is robust to feature-sparse environments and challenging lighting conditions. However, these localization methods require an existing map for comparison. Our work aims to relax the need for a map by introducing a full simultaneous localization and mapping (SLAM) system. Because no existing map is required, setup times are minimized and the system is more robust to changing environments. This SLAM system combines several techniques. Image keypoints are identified and projected into the ground plane. These keypoints, visual bags of words, and several threshold parameters are then used to identify overlapping images and revisited areas. The system then uses robust M-estimators to estimate the transforms between robot poses with overlapping images and revisited areas. These optimized estimates make up the map used for navigation. We show, through experimental data, that this system performs reliably on many ground textures, but not all.

* 7 pages, 9 figures. To appear at ICRA 2023, London, UK. Distribution Statement A: Approved for public release; distribution is unlimited, as submitted under NAVAIR Public Release Authorization 2022-0586. The views expressed here are those of the authors and do not reflect the official policy or position of the U.S. Navy, Department of Defense, or U.S. Government 
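The keypoint projection step can be sketched for the simplest geometry: a calibrated pinhole camera looking straight down from a known height. The intrinsic matrix and height below are illustrative values, not the paper's calibration.

```python
import numpy as np

def pixel_to_ground(u, v, K, height):
    """Back-project a pixel from a downward-facing pinhole camera onto the
    ground plane, assuming the optical axis is perpendicular to the ground.
    Returns the (x, y) offset of the ground point from the camera footprint."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * height / fx, (v - cy) * height / fy])

# Illustrative calibration: 500 px focal length, 640x480 image, camera 0.25 m up.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
ground_point = pixel_to_ground(420, 240, K, height=0.25)
```

With keypoints expressed in metric ground-plane coordinates like this, the relative transform between two overlapping views becomes a 2D rigid registration problem, which the robust M-estimators mentioned above can solve despite outlier matches.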

DRACo-SLAM: Distributed Robust Acoustic Communication-efficient SLAM for Imaging Sonar Equipped Underwater Robot Teams

Oct 03, 2022
John McConnell, Yewei Huang, Paul Szenher, Ivana Collado-Gonzalez, Brendan Englot

An essential task for a multi-robot system is generating a common understanding of the environment and relative poses between robots. Cooperative tasks can be executed only when a vehicle has knowledge of its own state and the states of the team members. However, this has primarily been achieved with direct rendezvous between underwater robots, via inter-robot ranging. We propose a novel distributed multi-robot simultaneous localization and mapping (SLAM) framework for underwater robots using imaging sonar-based perception. By passing only scene descriptors between robots, we do not need to pass raw sensor data unless there is a likelihood of inter-robot loop closure. We utilize pairwise consistent measurement set maximization (PCM), making our system robust to erroneous loop closures. The functionality of our system is demonstrated using two real-world datasets, one with three robots and another with two robots. We show that our system effectively estimates the trajectories of the multi-robot system and keeps the bandwidth requirements of inter-robot communication low. To our knowledge, this paper describes the first instance of multi-robot SLAM using real imaging sonar data (which we implement offline, using simulated communication). Code link: https://github.com/jake3991/DRACo-SLAM.

* To appear at IROS 2022 in Kyoto, Japan 
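The role of pairwise consistent measurement set maximization (PCM) can be illustrated with a toy sketch: treat each candidate inter-robot loop closure as a graph node, connect pairs that agree with each other, and keep the largest mutually consistent subset, i.e. a maximum clique. A brute-force search is shown for clarity; the actual system and its consistency metric are more sophisticated.

```python
from itertools import combinations

def max_consistent_set(n, consistent):
    """Return the largest subset of n loop-closure candidates in which every
    pair passes the user-supplied `consistent(i, j)` check -- a maximum
    clique of the consistency graph (brute force, small n only)."""
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(consistent(i, j) for i, j in combinations(subset, 2)):
                return list(subset)
    return []

# Toy example: closures 0, 1, 2 mutually agree; 3 agrees only with 2 (outlier).
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
inliers = max_consistent_set(4, lambda i, j: tuple(sorted((i, j))) in edges)
```

Rejecting closure 3 here is exactly the robustness the abstract credits to PCM: a single erroneous inter-robot loop closure is excluded rather than corrupting the joint trajectory estimate.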

A Fully-autonomous Framework of Unmanned Surface Vehicles in Maritime Environments using Gaussian Process Motion Planning

Apr 22, 2022
Jiawei Meng, Ankita Humne, Richard Bucknall, Brendan Englot, Yuanchang Liu

Unmanned surface vehicles (USVs) are of increasing importance to a growing number of sectors in the maritime industry, including offshore exploration, marine transportation and defence operations. A major factor in the growing use and deployment of USVs is the operational flexibility offered by autonomous navigation systems that generate optimised trajectories. Unlike path planning in terrestrial environments, planning in the maritime environment is more demanding, as mitigating action must be taken against the significant, random and often unpredictable environmental influences of winds and ocean currents. Motivated by these requirements, this paper proposes a novel motion planner, denoted GPMP2*, which extends the application scope of the fundamental GP-based motion planner, GPMP2, into complex maritime environments. An interpolation strategy based on Monte-Carlo stochasticity is added to GPMP2* to produce a new algorithm, GPMP2* with Monte-Carlo stochasticity (MC-GPMP2*), which increases the diversity of the paths generated. In parallel with the algorithm design, a ROS-based fully autonomous framework for an advanced unmanned surface vehicle, the WAM-V 20 USV, has been proposed. The practicability of the proposed motion planner and the fully autonomous framework has been validated in simulated inspection missions for an offshore wind farm in ROS.

* 14 pages, 13 figures 

Virtual Maps for Autonomous Exploration of Cluttered Underwater Environments

Feb 16, 2022
Jinkun Wang, Fanfei Chen, Yewei Huang, John McConnell, Tixiao Shan, Brendan Englot

We consider the problem of autonomous mobile robot exploration in an unknown environment, taking into account a robot's coverage rate, map uncertainty, and state estimation uncertainty. This paper presents a novel exploration framework for underwater robots operating in cluttered environments, built upon simultaneous localization and mapping (SLAM) with imaging sonar. The proposed system comprises path generation, place recognition forecasting, belief propagation and utility evaluation using a virtual map, which estimates the uncertainty associated with map cells throughout a robot's workspace. We evaluate the performance of this framework in simulated experiments, showing that our algorithm maintains a high coverage rate during exploration while also maintaining low mapping and localization error. The real-world applicability of our framework is also demonstrated on an underwater remotely operated vehicle (ROV) exploring a harbor environment.

* Preprint; Accepted for publication in the IEEE Journal of Oceanic Engineering 
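The virtual map's role in utility evaluation can be sketched in a few lines (an illustrative stand-in, not the paper's formulation): keep a covariance per map cell, and score a candidate path by the total uncertainty of the cells it would observe.

```python
import numpy as np

def virtual_map_utility(cell_covariances, observed_mask):
    """Score a candidate path by the summed covariance trace of the
    virtual-map cells it would observe; a higher score means the path
    addresses more map uncertainty. `cell_covariances` has shape
    (n_cells, d, d); `observed_mask` flags cells the path would see."""
    traces = np.trace(cell_covariances, axis1=-2, axis2=-1)
    return float(traces[observed_mask].sum())

# Three cells with increasing uncertainty; the path sees the first and last.
covs = np.stack([np.eye(2) * 1.0, np.eye(2) * 2.0, np.eye(2) * 3.0])
score = virtual_map_utility(covs, np.array([True, False, True]))
```

In the full framework this score would be traded off against travel cost and the predicted localization uncertainty from belief propagation, rather than used alone.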

Overhead Image Factors for Underwater Sonar-based SLAM

Feb 11, 2022
John McConnell, Fanfei Chen, Brendan Englot

Simultaneous localization and mapping (SLAM) is a critical capability for any autonomous underwater vehicle (AUV). However, robust, accurate state estimation is still a work in progress when using low-cost sensors. We propose enhancing a typical low-cost sensor package using widely available and often free prior information: overhead imagery. Given an AUV's sonar image and a partially overlapping, globally referenced overhead image, we propose using a convolutional neural network (CNN) to generate a synthetic overhead image predicting the above-surface appearance of the sonar image contents. We then use this synthetic overhead image to register our observations to the provided global overhead image. Once registered, the transformation is introduced as a factor into a pose SLAM factor graph. We use a state-of-the-art simulation environment to perform validation over a series of benchmark trajectories and quantitatively show the improved accuracy of robot state estimation using the proposed approach. We also show qualitative outcomes from a real AUV field deployment. Video attachment: https://youtu.be/_uWljtp58ks

* To appear in RA-L 2022 and presented at ICRA 2022 
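How a successful registration enters the pose graph can be sketched as a unary factor (illustrative only; the paper's factor formulation and SLAM back end are not reproduced here): the residual compares the current SE(2) pose estimate against the globally referenced pose recovered from the overhead-image registration, with the heading difference wrapped into (-pi, pi].

```python
import numpy as np

def registration_residual(pose, registration):
    """Residual of a unary pose factor: the difference between an SE(2) pose
    estimate (x, y, theta) and a globally referenced registration result.
    A pose-graph optimizer drives this error, weighted by the registration
    covariance, toward zero."""
    dx, dy = pose[0] - registration[0], pose[1] - registration[1]
    # Wrap the heading error to avoid a spurious ~2*pi residual near +/-pi.
    dtheta = (pose[2] - registration[2] + np.pi) % (2 * np.pi) - np.pi
    return np.array([dx, dy, dtheta])

residual = registration_residual(np.array([1.0, 2.0, 3.1]),
                                 np.array([1.0, 2.0, -3.1]))
```

Because the registration is tied to a globally referenced image, such factors bound drift in absolute coordinates, complementing the relative constraints from sonar odometry and loop closures.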

Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty

May 11, 2021
Fanfei Chen, Paul Szenher, Yewei Huang, Jinkun Wang, Tixiao Shan, Shi Bai, Brendan Englot

This paper studies the problem of autonomous exploration under localization uncertainty for a mobile robot with 3D range sensing. We present a framework for self-learning a high-performance exploration policy in a single simulation environment, and transferring it to other environments, which may be physical or virtual. Recent work in transfer learning achieves encouraging performance by domain adaptation and domain randomization to expose an agent to scenarios that fill the inherent gaps in sim2sim and sim2real approaches. However, it is inefficient to train an agent in environments with randomized conditions to learn the important features of its current state. An agent can use domain knowledge provided by human experts to learn efficiently. We propose a novel approach that uses graph neural networks in conjunction with deep reinforcement learning, enabling decision-making over graphs containing relevant exploration information provided by human experts to predict a robot's optimal sensing action in belief space. The policy, which is trained only in a single simulation environment, offers a real-time, scalable, and transferable decision-making strategy, resulting in zero-shot transfer to other simulation environments and even real-world environments.
