This article presents an architecture for multi-agent task allocation and execution that unifies a market-inspired task-auctioning system with Behavior Trees for managing and executing lower-level behaviors. We consider a scenario with multi-stage tasks, such as 'pick and place', whose arrival times are not known a priori. In such a scenario, a coordinating architecture is expected to be reactive to newly arrived tasks, and any resulting rerouting of agents should depend on the stage of completion of their current multi-stage tasks. In the novel architecture proposed in this article, a central auctioning system gathers bids (cost estimates for completing currently available tasks) from all agents and solves a combinatorial problem to optimally assign tasks to agents. Each agent's participation in the auctioning system and execution of an assigned multi-stage task is managed using Behavior Trees, which switch among several well-defined behaviors in response to changing scenarios. The auctioning system is run at a fixed rate, allowing newly added tasks to be incorporated, which makes the solution reactive and allows some agents to be rerouted (subject to the states of their Behavior Trees). We demonstrate that the proposed architecture is especially well suited to multi-stage tasks, where high costs are incurred when rerouting agents that have completed one or more stages of their current tasks. A scalability analysis reveals that the architecture scales well with the number of agents and the number of tasks. The proposed framework is experimentally validated in multiple scenarios in a lab environment. A video of a demonstration can be viewed at: https://youtu.be/ZdEkoOOlB2g.
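The combinatorial assignment step of such an auction round can be sketched as follows. This is a minimal, illustrative brute-force version (the function and variable names are my own, not the paper's), feasible only for small teams; the actual system would use a proper combinatorial solver.

```python
# Minimal sketch of one auction round: agents submit bids (cost estimates)
# for each open task, and the auctioneer picks the assignment that
# minimizes total cost. Assumes n_agents >= n_tasks.
from itertools import permutations

def auction_round(bids):
    """bids[i][j] = cost estimate of agent i for task j.

    Returns (assignment, total_cost), where assignment maps agent -> task.
    """
    n_agents = len(bids)
    n_tasks = len(bids[0])
    best_cost, best_assign = float("inf"), None
    # Enumerate every way of giving each task to a distinct agent.
    for perm in permutations(range(n_agents), n_tasks):
        cost = sum(bids[agent][task] for task, agent in enumerate(perm))
        if cost < best_cost:
            best_cost = cost
            best_assign = {perm[t]: t for t in range(n_tasks)}
    return best_assign, best_cost
```

Rerunning this at a fixed rate with fresh bids, as the article describes, is what makes newly arrived tasks enter the allocation; agents mid-way through a multi-stage task would simply report a high bid for other tasks.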
With the ever-growing amount of space debris in orbit, the need to prevent further pollution of the orbital environment is becoming more and more apparent. Refueling, servicing, inspection and deorbiting of spacecraft are example missions that require precise navigation and docking in space. Having multiple collaborating robots handle these tasks can greatly increase the efficiency of a mission in terms of time and cost. This article introduces a modern and efficient control architecture for satellites on collaborative docking missions. The proposed architecture uses a centralized scheme that combines state-of-the-art, ad-hoc implementations of algorithms and techniques to maximize robustness and flexibility. It is based on a Model Predictive Controller (MPC), for which an efficient cost function and constraint sets are designed to ensure safe and accurate docking. A simulation environment is also presented to validate and test the proposed control scheme.
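The general shape of such an MPC objective can be illustrated with a generic quadratic stage cost over the predicted horizon; this is only a sketch of the standard form (weights, state layout and function name are assumptions), not the paper's actual cost function or its docking-safety constraints.

```python
def docking_cost(states, inputs, target, Q=1.0, R=0.1):
    """Generic quadratic MPC stage cost summed over a horizon:
    penalize predicted distance to the docking target and control effort.

    states: list of predicted state tuples, inputs: list of input tuples.
    """
    cost = 0.0
    for x, u in zip(states, inputs):
        cost += Q * sum((xi - ti) ** 2 for xi, ti in zip(x, target))  # tracking
        cost += R * sum(ui ** 2 for ui in u)                          # effort
    return cost
```

In a real docking MPC this objective would be minimized subject to dynamics and constraint sets (approach cone, velocity limits), which is where the safety guarantees the abstract mentions come from.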
Autonomous navigation of robots in harsh, GPS-denied subterranean (SubT) environments with poor or no natural illumination is a challenging task that fosters the development of algorithms for pose estimation and mapping. Inspired by the need for real-life deployment of autonomous robots in such environments, this article presents an experimental comparative study of 3D SLAM algorithms. The study focuses on state-of-the-art lidar SLAM algorithms with open-source implementations that are i) lidar-only, like BLAM, LOAM, A-LOAM, ISC-LOAM and hdl_graph_slam, or ii) lidar-inertial, like LeGO-LOAM, Cartographer, LIO-mapping and LIO-SAM. The evaluation of the methods is performed on a dataset collected with a Boston Dynamics Spot robot equipped with a Velodyne Puck Lite 3D lidar and a VectorNav VN-100 IMU during a mission in an underground tunnel. In the evaluation process, poses and 3D tunnel reconstructions from the SLAM algorithms are compared against each other to find the methods with the most solid performance in terms of pose accuracy and map quality.
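A common pose-accuracy metric in such comparisons is the absolute trajectory error (ATE). A minimal sketch, assuming the estimated and reference trajectories are already time-associated and expressed in a common frame (a full evaluation would first align them, e.g. with Umeyama's method); the function name is illustrative, not from the article.

```python
import math

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error between two
    time-aligned lists of (x, y, z) positions in a common frame."""
    assert len(est) == len(gt) and est, "trajectories must match in length"
    sq_err = sum((e[k] - g[k]) ** 2
                 for e, g in zip(est, gt)
                 for k in range(3))
    return math.sqrt(sq_err / len(est))
```

Lower ATE RMSE indicates better pose accuracy; map quality, by contrast, is typically judged by comparing the 3D reconstructions themselves, as done in the article.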
In recent years, cloud and edge architectures have gained tremendous attention for offloading computationally heavy applications. From machine learning and the Internet of Things (IoT) to industrial processes and robotics, cloud computing has been used extensively for data processing and storage, thanks to its "infinite" resources. On the other hand, cloud computing is characterized by long delays due to the large distance between the cloud servers and the machine requesting the resources. In contrast, edge computing provides almost real-time services, since edge servers are located significantly closer to the source of the data. This capability makes edge computing an ideal option for real-time applications, such as high-level control, on resource-constrained platforms. To utilize edge resources, several technologies, most fundamentally containers and orchestrators like Kubernetes, have been developed to provide an environment with many features tailored to each application's requirements. In this context, this work presents the implementation and evaluation of a novel edge architecture based on Kubernetes orchestration for controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle (UAV) by enabling Model Predictive Control (MPC).
With the advent of technologies such as edge computing, the horizons of remote computational applications have broadened multidimensionally. Autonomous Unmanned Aerial Vehicle (UAV) missions are a vital application that can use remote computation to catalyze performance. However, offloading computation to a remote system increases the latency in the loop. Although technologies such as 5G networking minimize communication latency, the effects of latency on the control of UAVs are inevitable and may destabilize the system. Hence, it is essential to consider the delays in the system and compensate for them in the control design. Therefore, we propose a novel edge-based predictive control architecture enabled by 5G networking, PACED-5G (Predictive Autonomous Control using Edge for Drones over 5G). In the proposed control architecture, we design a state estimator that estimates the current states based on the available knowledge of the time-varying delays, devise a Model Predictive Controller (MPC) for the UAV to track a reference trajectory while avoiding obstacles, and provide an interface to offload high-level tasks to edge systems. The proposed architecture is validated in two experimental test cases using a quadrotor UAV.
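The core idea of such delay compensation is to roll the last received state forward through the known delay using the inputs already in flight. A minimal sketch with a single-integrator model and Euler integration (the real estimator in PACED-5G uses the UAV's dynamics and the time-varying delay estimates; names here are illustrative):

```python
def forward_predict(x, u_history, delay_steps, dt):
    """Predict the current state from a delayed measurement x by
    replaying the last `delay_steps` control inputs already sent.

    Simple single-integrator model: x_{k+1} = x_k + dt * u_k.
    """
    for u in u_history[-delay_steps:]:
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x
```

The MPC on the edge then plans from this predicted state rather than the stale measurement, which is what keeps the loop stable despite the round-trip delay.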
This article presents a 3D point cloud map-merging framework for egocentric heterogeneous multi-robot exploration, based on overlap detection and alignment, that is independent of a manual initial guess or prior knowledge of the robots' poses. The proposed solution utilizes state-of-the-art learned place-recognition descriptors that, through the framework's main pipeline, offer fast and robust region overlap estimation, eliminating the need for the time-consuming global feature extraction and feature matching typically used in 3D map integration. The region overlap estimation provides a homogeneous rigid transform that is applied as the initial condition of the point cloud registration algorithm Fast-GICP, which produces the final, refined alignment. The efficacy of the proposed framework is experimentally evaluated in multiple field multi-robot exploration missions in underground environments, where both ground and aerial robots with different sensor configurations are deployed.
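The handover between the two stages is a 4x4 homogeneous rigid transform: the overlap estimate yields a coarse transform that is applied to one map before registration refines it. A small sketch of building and applying such a transform (yaw-only rotation for brevity; function names are my own):

```python
import math

def make_transform(yaw, t):
    """4x4 homogeneous transform from a yaw angle (rad) and a
    translation (x, y, z) -- the kind of coarse initial guess the
    overlap-estimation stage produces."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, t[0]],
            [s,  c, 0.0, t[1]],
            [0.0, 0.0, 1.0, t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply_transform(T, cloud):
    """Move a list of (x, y, z) points into the target map frame,
    before handing both clouds to a registration step (Fast-GICP in
    the article) for refinement."""
    out = []
    for x, y, z in cloud:
        p = (x, y, z, 1.0)
        out.append(tuple(sum(T[r][c] * p[c] for c in range(4))
                         for r in range(3)))
    return out
```

Because registration algorithms like GICP only converge locally, the quality of this initial transform is what makes the subsequent refinement reliable without any manual guess.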
Edge computing is becoming more and more popular among researchers who seek to exploit edge resources and minimal time delays in order to run their robotic applications more efficiently. Recently, many edge architectures have been proposed, each with advantages and disadvantages that depend on the application. In this work, we present two different edge architectures for controlling the trajectory of an Unmanned Aerial Vehicle (UAV). The first architecture is based on Docker containers and the second on Kubernetes, while the main framework for operating the robot is the Robot Operating System (ROS). The efficiency of the overall proposed scheme is evaluated through extensive simulations comparing the two architectures and the results obtained.
Mapping and exploration of Martian terrain with an aerial vehicle has become an emerging research direction since the successful flight demonstration of the Mars helicopter Ingenuity. Although the autonomy and navigation capability of the state-of-the-art Mars helicopter has proven efficient in open environments, the next areas of interest for exploration on Mars are caves and ancient lava-tube-like environments, especially in the never-ending search for life on other planets. This article presents an autonomous exploration mission based on a modified frontier approach, together with risk-aware planning and an integrated collision avoidance scheme, with a special focus on the energy aspects of a custom-designed Mars Coaxial Quadrotor (MCQ) in a simulated Martian lava tube. One of the biggest novelties of the article stems from addressing the exploration capability of rapidly exploring local areas while intelligently re-positioning the MCQ globally when reaching dead ends, in order to use the battery's energy efficiently while increasing the explored volume. The proposed three-layer cost-based global re-position point selection assists in rapidly redirecting the MCQ to previously partially seen areas that could lead to more unexplored parts of the lava tube. The fully simulated Martian mission presented in this article takes into consideration the physics of Martian conditions, namely the thin atmosphere, low surface pressure and low gravity of the planet, and proves the efficiency of the proposed scheme in exploring an area that is particularly challenging due to its subterranean-like environment. The proposed exploration-planning framework is also validated in simulation by comparison against a graph-based exploration planner.
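A cost-based re-position point selection of the kind described above can be sketched as a weighted trade-off between travel distance, expected information gain and energy. The weights, field names and linear combination below are illustrative assumptions, not the article's actual three-layer formulation:

```python
def select_reposition_point(candidates, robot_pos,
                            w_dist=1.0, w_gain=2.0, w_energy=0.5):
    """Pick the revisit point with the lowest weighted cost.

    candidates: list of dicts with "pos" (x, y, z), "gain" (expected
    unexplored volume behind the point) and "energy" (estimated cost
    to reach it). Distance is penalized, gain is rewarded.
    """
    def cost(c):
        dist = sum((a - b) ** 2 for a, b in zip(c["pos"], robot_pos)) ** 0.5
        return w_dist * dist - w_gain * c["gain"] + w_energy * c["energy"]
    return min(candidates, key=cost)
```

When the local frontier search hits a dead end, evaluating such a cost over previously partially seen areas is what lets the vehicle spend its remaining battery on the most promising unexplored branch.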
Current global re-localization algorithms are built on top of localization and mapping methods and rely heavily on scan matching and direct point cloud feature extraction, and are therefore vulnerable in featureless, demanding environments like caves and tunnels. In this article, we propose a novel global re-localization framework that: a) does not require an initial guess, unlike most methods; b) offers the top-k candidates to choose from; and c) provides an event-based re-localization trigger module for enabling and supporting completely autonomous robotic missions. With a focus on subterranean environments with few features, we opt for descriptors based on range images from 3D LiDAR scans in order to maintain the depth information of the environment. In our novel approach, we use a state-of-the-art data-driven descriptor extraction framework for place recognition and orientation regression, and enhance it with a junction detection module that also utilizes the descriptors for classification purposes.
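The range-image input representation mentioned above is a spherical projection of the LiDAR scan. A minimal sketch, assuming a 16-beam sensor with a +/-15 degree vertical field of view (Puck-class; the resolution, field of view and function name are assumptions, not the article's settings):

```python
import math

def to_range_image(points, h=16, w=360, v_fov=(-15.0, 15.0)):
    """Project a 3D LiDAR scan (list of (x, y, z) points) onto an
    h x w range image; each cell stores the range of the point that
    falls into it (0.0 = no return)."""
    img = [[0.0] * w for _ in range(h)]
    v_lo, v_hi = v_fov
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        yaw = math.atan2(y, x)                      # horizontal angle
        pitch = math.degrees(math.asin(z / r))      # vertical angle
        col = min(w - 1, int((yaw + math.pi) / (2 * math.pi) * w))
        row = min(h - 1, max(0, int((pitch - v_lo) / (v_hi - v_lo) * h)))
        img[row][col] = r
    return img
```

Because every cell keeps a metric range rather than intensity or geometry features, descriptors computed from such images preserve the depth structure of the tunnel, which is what makes them usable where point-feature extraction fails.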