In recent years, mobile robots have become increasingly ambitious and are being deployed in large-scale scenarios. Serving as a high-level abstraction of the environment, a sparse skeleton graph is beneficial for efficient global planning. Existing solutions for skeleton graph generation suffer from several major limitations, including poor adaptability to different map representations, dependency on robot inspection trajectories, and high computational overhead. In this paper, we propose an efficient and flexible algorithm that generates a trajectory-independent 3D sparse topological skeleton graph capturing the spatial structure of the free space. In our method, an efficient ray sampling and validating mechanism is adopted to find distinctive free-space regions, which contribute the vertices of the skeleton graph, with traversability between adjacent vertices forming the edges. A cycle formation scheme is also utilized to maintain the compactness of the skeleton graph. Benchmark comparisons with state-of-the-art works demonstrate that our approach generates sparse graphs in substantially shorter time while yielding high-quality global planning paths. Experiments conducted on real-world maps further validate the capability of our method in real-world scenarios. Our method will be made open source to benefit the community.
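The ray sampling step above probes how far free space extends around a candidate point. A minimal sketch of such ray casting over a 3D occupancy grid follows; the grid layout, step size, and scoring of "distinctive" regions are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def cast_rays(occ, origin, directions, max_range=50.0, step=0.5):
    """Cast rays through a 3-D occupancy grid and return hit distances.

    occ: boolean array, True = occupied voxel. Each ray stops at the
    first occupied voxel, the grid boundary, or max_range. The returned
    distance profile indicates how "open" the region around origin is.
    """
    dists = []
    for d in directions:
        d = d / np.linalg.norm(d)
        t = 0.0
        while t < max_range:
            p = np.floor(origin + t * d).astype(int)
            if np.any(p < 0) or np.any(p >= occ.shape) or occ[tuple(p)]:
                break
            t += step
        dists.append(t)
    return np.array(dists)

# Empty 20^3 grid with one wall at x = 15: the +x ray stops early,
# rays in other directions travel farther.
occ = np.zeros((20, 20, 20), dtype=bool)
occ[15, :, :] = True
dirs = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
d = cast_rays(occ, np.array([10.0, 10.0, 10.0]), dirs, max_range=30.0)
```

In a full pipeline, points whose distance profiles differ markedly from their neighbors' would be candidates for skeleton vertices.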
The challenges of learning a robust 6D pose function lie in 1) severe occlusion and 2) systematic noise in depth images. Inspired by the success of point-pair features, the goal of this paper is to recover the 6D pose of an object instance segmented from RGB-D images by locally matching pairs of oriented points between the model and camera space. To this end, we propose a novel Bi-directional Correspondence Mapping Network (BiCo-Net) that first generates point clouds guided by a typical pose regression, which can thus incorporate pose-sensitive information to optimize the generation of local coordinates and their normal vectors. As pose predictions via geometric computation rely on only a single pair of local oriented points, our BiCo-Net achieves robustness against sparse and occluded point clouds. An ensemble of redundant pose predictions from local matching and direct pose regression further refines the final pose output against noisy observations. Experimental results on three popular benchmark datasets verify that our method achieves state-of-the-art performance, especially in the more challenging, severely occluded scenes. Source code is available at https://github.com/Gorilla-Lab-SCUT/BiCo-Net.
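To see why a single pair of oriented points suffices for a pose hypothesis, note that two matched points plus their normals give four point correspondences (treating each normal tip as an extra point), which pins down a rigid transform. This is a geometric sketch of that idea using the classic Kabsch/SVD solution, not the BiCo-Net network itself; all names and the test pose are illustrative:

```python
import numpy as np

def pose_from_oriented_pair(model_pts, model_nrm, scene_pts, scene_nrm):
    """Recover a rigid transform (R, t) from one pair of oriented points.

    Each argument is a (2, 3) array: two matched points and their unit
    normals in model and camera space. Treating each normal tip as an
    extra correspondence yields four point matches, solved via Kabsch.
    """
    P = np.vstack([model_pts, model_pts + model_nrm])  # model-space points
    Q = np.vstack([scene_pts, scene_pts + scene_nrm])  # camera-space points
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # D guards against a reflection solution
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# Example: recover a known pose (90-degree yaw plus a translation).
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
mp = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mn = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
R, t = pose_from_oriented_pair(mp, mn, mp @ Rz.T + t_true, mn @ Rz.T)
```

Because each hypothesis needs only one pair, many redundant hypotheses can be formed from visible surface fragments and ensembled, which is what makes the scheme tolerant of occlusion.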
Despite recent progress in robotic exploration, most methods assume that drift-free localization is available, which is problematic in reality and causes severe distortion of the reconstructed map. In this work, we present a systematic exploration mapping and planning framework that deals with drifted localization, allowing efficient and globally consistent reconstruction. A real-time re-integration-based mapping approach, along with a frame pruning mechanism, is proposed, which effectively rectifies map distortion when drifted localization is corrected upon loop-closure detection. In addition, an exploration planning method that considers historical viewpoints is presented to enable active loop closing, which increases the chances of correcting localization errors and further improves mapping quality. We comprehensively evaluate both the mapping and planning methods, as well as the entire system, in simulation and real-world experiments, showing their effectiveness in practice. The implementation of the proposed method will be made open source for the benefit of the robotics community.
Contrastive self-supervised learning has recently attracted significant research attention. It learns effective visual representations from unlabeled data by embedding augmented views of the same image close to each other while pushing away the embeddings of different images. Despite its great success on ImageNet classification, COCO object detection, etc., its performance degrades on contrast-agnostic applications, e.g., medical image classification, where all images are visually similar to each other. This makes it difficult to optimize the embedding space, as the distances between images are rather small. To solve this issue, we present the first mix-up self-supervised learning framework for contrast-agnostic applications. We address the low variance across images through cross-domain mix-up and build the pretext task on two synergistic objectives: image reconstruction and transparency prediction. Experimental results on two benchmark datasets validate the effectiveness of our method, with an improvement of 2.5% to 7.4% in top-1 accuracy over existing self-supervised learning methods.
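The cross-domain mix-up described above blends a low-variance image with one from another domain, and the blend itself defines the two pretext targets. A minimal sketch, assuming a standard Beta-distributed mixing coefficient (the function name, shapes, and sampling choice are illustrative, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_domain_mixup(x_target, x_source, alpha=1.0, rng=rng):
    """Blend a contrast-agnostic image with one from another domain.

    Returns the mixed input plus the two pretext targets suggested by
    the abstract: the original image (for the reconstruction objective)
    and the mixing coefficient lam (for transparency prediction).
    """
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_target + (1.0 - lam) * x_source
    return x_mix, x_target, lam

# Toy example with two random "images".
x_med = rng.random((32, 32, 3))  # e.g., a medical image (low-variance domain)
x_nat = rng.random((32, 32, 3))  # e.g., a natural image from another domain
x_mix, recon_target, lam = cross_domain_mixup(x_med, x_nat)
```

A network trained to undo the blend (reconstruct `recon_target`) and estimate `lam` must attend to image content even when inter-image contrast is weak.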
Nowadays, multirotors play important roles in a wide variety of missions. During these missions, entering confined and narrow tunnels that are barely accessible to humans is desirable yet extremely challenging for multirotors. The restricted space and significant ego-airflow disturbances induce control issues at both fast and slow flight speeds, while also causing problems in state estimation and perception. Thus, a smooth trajectory at a proper speed is necessary for safe tunnel flights. To address these challenges, in this letter, a complete autonomous aerial system that can fly smoothly through tunnels as narrow as 0.6 m is presented. The system contains a motion planner that generates smooth minimum-jerk trajectories along the tunnel centerlines, which are extracted from the map and a Euclidean Distance Field (EDF), and its practical speed range is obtained through computational fluid dynamics (CFD) and flight data analyses. Extensive flight experiments are conducted with the quadrotor inside multiple narrow tunnels to validate the planning framework as well as the robustness of the whole system.
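A minimum-jerk trajectory segment between centerline waypoints is a quintic polynomial fixed by position, velocity, and acceleration boundary conditions. The following 1-D sketch shows the standard construction; the boundary values and segment duration are illustrative, not taken from the paper:

```python
import numpy as np

def min_jerk_coeffs(p0, v0, a0, p1, v1, a1, T):
    """Quintic polynomial coefficients for a 1-D minimum-jerk segment.

    Solves for c in p(t) = sum(c[i] * t**i) subject to position,
    velocity, and acceleration boundary conditions at t = 0 and t = T.
    """
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, p1, v1, a1], dtype=float)
    return np.linalg.solve(A, b)

# Rest-to-rest segment from 0 m to 1 m in 2 s along one centerline axis.
c = min_jerk_coeffs(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 2.0)
pos = lambda t: sum(ci * t**i for i, ci in enumerate(c))
```

In 3D, one such polynomial is solved per axis; chaining segments along the extracted centerline then yields the smooth tunnel trajectory.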
Data augmentation has always been an effective way to overcome overfitting when the dataset is small. Many augmentation operations already exist, such as horizontal flipping, random cropping, and even Mixup. However, unlike in image classification, we cannot simply apply these operations to object detection, because the generated images lack labeled bounding boxes. To address this challenge, we propose a framework that uses Generative Adversarial Networks (GANs) to perform unsupervised data augmentation. Specifically, building on the recent strong performance of YOLOv4, we propose a two-step pipeline that enables us to generate an image in which the object lies at a specified position. In this way, we accomplish the goal of generating an image together with its bounding-box label.
Decentralized state estimation is one of the most fundamental components of autonomous aerial swarm systems in GPS-denied areas, and it remains a highly challenging research topic. To address this research gap, this paper proposes Omni-swarm, a decentralized omnidirectional visual-inertial-UWB state estimation system for aerial swarms. To solve the issues of observability, complicated initialization, insufficient accuracy, and lack of global consistency, we introduce an omnidirectional perception system as the front end of Omni-swarm, consisting of omnidirectional sensors, including stereo fisheye cameras and ultra-wideband (UWB) sensors, and algorithms, including fisheye visual-inertial odometry (VIO), multi-drone map-based localization, and a visual object detector. Graph-based optimization and forward propagation serve as the back end of Omni-swarm, fusing the measurements from the front end. Experimental results show that the proposed decentralized state estimation method achieves centimeter-level relative state estimation accuracy while ensuring global consistency. Moreover, supported by Omni-swarm, inter-drone collision avoidance can be accomplished in a fully decentralized scheme without any external device, demonstrating the potential of Omni-swarm to serve as the foundation of autonomous aerial swarm flights in various scenarios.
In this paper, we present a controller synthesis approach for switched stochastic control systems with metric temporal logic (MTL) specifications that provides provable probabilistic guarantees. We first present the stochastic control bisimulation function for switched stochastic control systems, which probabilistically bounds the trajectory divergence between the switched stochastic control system and its nominal deterministic counterpart. We then develop a method to compute optimal control inputs by solving an optimization problem for the nominal trajectory of the deterministic control system, with robustness against initial state variations and stochastic uncertainties. We implement our robust stochastic controller synthesis approach on both a four-bus and a nine-bus power system under generation-loss disturbances, with MTL specifications expressing requirements on grid frequency deviations, wind turbine generator rotor speed variations, and power flow constraints on different power lines.
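To make the role of the bisimulation function concrete, bounds of this flavor in the stochastic simulation-function literature typically follow a supermartingale argument; the form below is illustrative of that general pattern, not the paper's exact definition or conditions:

```latex
% If V(x, \bar{x}) \ge \|x - \bar{x}\|^2 and V is non-increasing in
% expectation along paired trajectories of the stochastic system and
% its nominal deterministic counterpart, then
P\Big( \sup_{t \in [0, T]} \|x(t) - \bar{x}(t)\| \ge \epsilon \Big)
  \;\le\; \frac{V(x(0), \bar{x}(0))}{\epsilon^2}
```

Under such a bound, an MTL specification satisfied by the nominal trajectory with robustness margin at least $\epsilon$ carries over to the stochastic system with probability at least $1 - V(x(0), \bar{x}(0))/\epsilon^2$, which is how nominal-trajectory optimization yields probabilistic guarantees.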
The collaboration of unmanned aerial vehicles (UAVs), also known as an aerial swarm, has become a popular research topic for its practicality and flexibility in many scenarios. However, one of the most fundamental components of autonomous aerial swarm systems in GPS-denied areas, robust decentralized relative state estimation, remains an extremely challenging research topic. To address this gap, this paper proposes Omni-swarm, an aerial swarm system with decentralized omnidirectional visual-inertial-UWB state estimation that features robustness, accuracy, and global consistency. We introduce a map-based localization method using deep learning tools to perform relative localization and re-localization within the aerial swarm while achieving fast initialization and maintaining the global consistency of state estimation. Furthermore, to overcome the sensors' visibility issues caused by a limited field of view (FoV), which severely affect the performance of state estimation, omnidirectional sensors, including fisheye cameras and ultra-wideband (UWB) sensors, are adopted. The state estimation module, together with the planning and control modules, is integrated on the aerial system with omnidirectional sensors to form Omni-swarm, and extensive experiments are performed to verify the validity and examine the performance of the proposed framework. Experimental results show that the proposed framework achieves centimeter-level relative state estimation accuracy while ensuring global consistency.