This paper proposes a formation motion planning framework for UAVs that fuses a perception-sharing scheme with a swarm trajectory global optimal (STGO) algorithm, aided by an active sensing system. First, the point cloud acquired by each UAV is fitted with a Gaussian mixture model (GMM) and transmitted within the swarm. Resampling from the received GMMs yields a global map, which serves as the foundation for consensus. Second, to improve flight safety, an active sensing system is designed to plan the observation angle of each UAV, considering the unknown field, the overlap of fields of view (FOV), the velocity direction, and the smoothness of yaw rotation; this planning problem is solved by a distributed particle swarm optimization (DPSO) algorithm. Last, for formation motion planning, the formation structure is allowed to undergo affine transformations to ensure obstacle avoidance and is treated as a soft constraint on the control points of a B-spline. In addition, STGO is introduced to avoid local minima. The combination of GMM communication and STGO guarantees safe and strict consensus among the UAVs. Tests on different formations in simulation show that our algorithm achieves strict consensus and an obstacle-avoidance success rate of at least 80% in dense environments. Moreover, the active sensing system increases the obstacle-avoidance success rate from 50% to 100% in some scenarios.
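As a minimal sketch of this share-and-resample idea (our illustration using scikit-learn's GaussianMixture, not the authors' implementation), each UAV compresses its local point cloud into GMM parameters, and a receiver rebuilds an approximate cloud by sampling from the transmitted model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def compress_cloud(points, n_components=8):
    """Fit a GMM to a local (N, 3) point cloud; its parameters are what
    would be transmitted in the swarm instead of the raw points."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(points)
    return gmm.weights_, gmm.means_, gmm.covariances_

def resample_gmm(weights, means, covs, n_samples=500, seed=0):
    """Draw an approximate point cloud back out of a received GMM."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n_samples, weights)
    return np.vstack([rng.multivariate_normal(means[k], covs[k], size=n)
                      for k, n in enumerate(counts) if n > 0])

# Merging the clouds resampled from every received model gives the global
# map that serves as the foundation for consensus.
local_cloud = np.random.rand(200, 3)
global_map = resample_gmm(*compress_cloud(local_cloud))
```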
Dynamic occupancy maps have been proposed in recent years to model obstacles in dynamic environments. Among these maps, the particle-based map offers a solid theoretical basis and the ability to model complex-shaped obstacles. Current particle-based maps describe the occupancy status in discrete grid form and suffer from the grid size problem: a large grid size is unfavorable for path planning, while a small grid size lowers efficiency and causes gaps and inconsistencies. To tackle this problem, this paper generalizes the particle-based map into continuous space and builds an efficient 3D local map. A dual-structure subspace division paradigm, composed of a voxel subspace division and a novel pyramid-like subspace division, is proposed to propagate particles and update the map efficiently while accounting for occlusions. The occupancy status of an arbitrary point can then be estimated from the cardinality expectation. To reduce noise when modeling static and dynamic obstacles simultaneously, an initial velocity estimation approach and a mixture model are utilized. Experimental results show that our map can effectively and efficiently model both dynamic and static obstacles. Compared to the state-of-the-art grid-form particle-based map, our map enables continuous occupancy estimation and substantially improves performance across different resolutions. We also deploy the map on a quadrotor to demonstrate the bright prospects of using this map in obstacle avoidance tasks for small-scale robotic systems.
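A hedged sketch of the continuous estimation step (our reading of the idea, not the authors' code): the occupancy status at an arbitrary query point is taken as the expected cardinality, i.e., the summed weight of particles falling inside a small neighborhood of the point, saturated at one:

```python
import numpy as np

def occupancy_at(query, particle_pos, particle_w, radius=0.1):
    """particle_pos: (N, 3) particle positions; particle_w: (N,) weights.
    Returns an occupancy estimate in [0, 1] at a continuous 3D point."""
    d2 = np.sum((particle_pos - query) ** 2, axis=1)
    expected_cardinality = particle_w[d2 < radius ** 2].sum()
    return min(1.0, expected_cardinality)
```

Because the query point is arbitrary rather than a grid cell center, the estimate is resolution-free, which is what removes the grid size trade-off.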
Obstacle avoidance of quadrotors in dynamic environments, with both static and dynamic obstacles, remains a very open problem. Current works commonly leverage traditional static maps to represent static obstacles and detection and tracking of moving objects (DATMO) methods to model dynamic obstacles separately. Such dynamic obstacles must belong to classes the detector was trained on and can only be modeled with certain shapes, such as cylinders or ellipsoids. This work utilizes our dual-structure particle-based (DSP) dynamic occupancy map to represent arbitrarily shaped static and dynamic obstacles simultaneously, and proposes an efficient risk-aware sampling-based local trajectory planner to realize safe flights in this map. The trajectory is planned by sampling motion primitives generated in the state space. Each motion primitive is divided into two phases: short-term planning with a strict risk limitation, and relatively long-term planning designed to avoid high-risk regions. The risk is evaluated from the predicted future occupancy status, represented by particles, with the time dimension taken into account. With an approach to split from and merge back to global trajectories, the planner can also be used with an arbitrary preplanned global trajectory. Comparison experiments show that the obstacle avoidance system composed of the DSP map and our planner performs best in dynamic environments. In real-world tests, our quadrotor reaches a speed of 6 m/s with a motion capture system and 2 m/s with everything computed on a low-cost single-board computer.
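A simplified sketch of the risk evaluation (assuming constant-velocity particle prediction; the two-phase split and thresholds are omitted): the risk of a primitive is the predicted particle weight that falls within the robot radius along the sampled trajectory, accumulated over the time dimension:

```python
import numpy as np

def primitive_risk(samples, times, p_pos, p_vel, p_w, robot_r=0.3):
    """samples: (T, 3) positions along a motion primitive at `times`;
    p_pos, p_vel, p_w: particle positions, velocities, and weights."""
    risk = 0.0
    for x, t in zip(samples, times):
        predicted = p_pos + p_vel * t             # forecast particles to time t
        d2 = np.sum((predicted - x) ** 2, axis=1)
        risk += p_w[d2 < robot_r ** 2].sum()      # weight inside robot radius
    return risk
```

Candidate primitives can then be ranked by this score, with a strict threshold applied to the early, short-term portion of each primitive.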
Spinning LiDAR data are prevalent for 3D perception tasks, yet their cylindrical image form is less studied. Conventional approaches regard scans as point clouds, and either rely on expensive Euclidean 3D nearest-neighbor search for data association or depend on projected range images for further processing. We revisit LiDAR scan formation and present a cylindrical range image representation for raw scan data, equipped with an efficient calibrated spherical projective model. With our formulation, we 1) collect a large dataset of LiDAR data consisting of both indoor and outdoor sequences accompanied by pseudo-ground-truth poses; 2) evaluate projective and conventional registration approaches on the sequences with both synthetic and real-world transformations; and 3) transfer state-of-the-art RGB-D algorithms to LiDAR, running at up to 180 Hz for registration and 150 Hz for dense reconstruction. The dataset and tools will be released.
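A rough illustration of such a projective model (our sketch with assumed image size and vertical FOV bounds, not the released tools): a raw scan point maps to a column via its azimuth and to a row via its elevation, keeping the nearest return per pixel:

```python
import numpy as np

def project(points, width=1024, height=64,
            fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-15.0)):
    """Project an (N, 3) scan into a cylindrical range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)     # assumes no zero-range points
    azimuth = np.arctan2(y, x)             # [-pi, pi)
    elevation = np.arcsin(z / r)
    u = np.clip((((azimuth / (2 * np.pi)) + 0.5) * width).astype(int),
                0, width - 1)
    v = np.clip(((fov_up - elevation) / (fov_up - fov_down) * height).astype(int),
                0, height - 1)
    image = np.full((height, width), np.inf)
    np.minimum.at(image, (v, u), r)        # nearest return wins per pixel
    return image
```

Data association then becomes a pixel lookup instead of a Euclidean 3D nearest-neighbor search, which is the appeal of the projective formulation.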
With the great success of pre-trained models, the pretrain-then-finetune paradigm has been widely adopted for downstream source code understanding tasks. However, compared to the cost of training a large-scale model from scratch, how to effectively adapt pre-trained models to a new task has not been fully explored. In this paper, we propose an approach to bridge pre-trained models and code-related tasks. We exploit semantic-preserving transformations to enrich downstream data diversity and help pre-trained models learn semantic features that are invariant to these semantically equivalent transformations. Further, we introduce curriculum learning to organize the transformed data in an easy-to-hard manner for fine-tuning existing pre-trained models. We apply our approach to a range of pre-trained models, and they significantly outperform state-of-the-art models on source code understanding tasks such as algorithm classification, code clone detection, and code search. Our experiments even show that, without heavy pre-training on code data, the natural-language pre-trained model RoBERTa fine-tuned with our lightweight approach can outperform or rival existing code pre-trained models fine-tuned on the above tasks, such as CodeBERT and GraphCodeBERT. This finding suggests that there is still much room for improvement in code pre-trained models.
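A toy example of one semantic-preserving transformation (our illustration, not the paper's tooling): renaming identifiers changes the surface form a model sees while leaving program behavior untouched (Python 3.9+ for ast.unparse):

```python
import ast

class Renamer(ast.NodeTransformer):
    """Rename variables according to a fixed mapping."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):          # uses of a variable
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):           # function parameters
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

src = "def f(total, n):\n    avg = total / n\n    return avg\n"
tree = Renamer({"avg": "v0", "total": "v1", "n": "v2"}).visit(ast.parse(src))
print(ast.unparse(tree))  # same semantics, different identifiers
```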
We present ASH, a modern and high-performance framework for parallel spatial hashing on the GPU. Compared to existing GPU hash map implementations, ASH achieves higher performance, supports richer functionality, and requires fewer lines of code (LoC) when used to implement spatially varying operations, from volumetric geometry reconstruction to differentiable appearance reconstruction. Unlike existing GPU hash maps, the ASH framework provides a versatile tensor interface, hiding low-level details from the user. In addition, by decoupling the internal hashing data structures from the key-value data in buffers, we offer direct access to spatially varying data via indices, enabling seamless integration with modern libraries such as PyTorch. To achieve this, we 1) detach stored key-value data from the low-level hash map implementation; 2) bridge the pointer-first low-level data structures to index-first high-level tensor interfaces via an index heap; and 3) adapt both generic and non-generic integer-only hash map implementations as backends to operate on multi-dimensional keys. We first profile our hash map against state-of-the-art hash maps on synthetic data to show the performance gain from this architecture. We then show that ASH consistently achieves higher performance on various large-scale 3D perception tasks with fewer LoC by showcasing several applications, including 1) point cloud voxelization, 2) dense volumetric SLAM, 3) non-rigid point cloud registration and volumetric deformation, and 4) spatially varying geometry and appearance refinement. ASH and its example applications are open-sourced in Open3D (http://www.open3d.org).
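A simplified stand-in for the decoupling described above (a NumPy/dict sketch of the architecture, not ASH's GPU implementation): the hash structure stores only stable integer indices into separate key and value buffers, so downstream tensor libraries can gather values by index:

```python
import numpy as np

class IndexedHashMap:
    """Hash structure and key-value storage are decoupled via indices."""
    def __init__(self, capacity, key_dim, value_dim):
        self.keys = np.zeros((capacity, key_dim), dtype=np.int64)
        self.values = np.zeros((capacity, value_dim), dtype=np.float32)
        self.slots = {}                    # stands in for the GPU hash backend
        self.free = list(range(capacity - 1, -1, -1))  # the "index heap"

    def activate(self, key):
        """Insert a key and return its stable buffer index."""
        k = tuple(key)
        if k not in self.slots:
            idx = self.free.pop()
            self.slots[k] = idx
            self.keys[idx] = key
        return self.slots[k]

    def find(self, key):
        return self.slots.get(tuple(key), -1)

m = IndexedHashMap(1024, key_dim=3, value_dim=8)
i = m.activate(np.array([4, 7, 2]))        # e.g., a voxel coordinate
m.values[i] = 0.5                          # values[indices] is a plain tensor op
```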
Relative localization is a prerequisite for the cooperation of aerial swarms. The vision-based approach has been investigated owing to its scalability and independence from communication. However, the limited field of view (FOV) inherently restricts the performance of vision-based relative localization. Inspired by bird flocks in nature, this letter proposes a novel distributed active vision-based relative localization framework for formation control in aerial swarms. Aiming to improve observation quality and formation accuracy, we devise graph-based attention planning (GAP) to determine the active observation scheme in the swarm. Active detection results are then fused with onboard measurements from UWB and VIO to obtain real-time relative positions, which further improve the formation control performance. Real-world experiments show that the proposed active vision system enables the swarm agents to achieve agile flocking movements with an acceleration of 4 $m/s^2$ in circular formation tasks. A 45.3% improvement in formation accuracy is achieved compared with a fixed vision system.
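A toy version of active observation planning (a hedged sketch, not the paper's GAP formulation): each agent picks the camera yaw that keeps the most neighbors inside a limited horizontal FOV:

```python
import numpy as np

def best_yaw(own, neighbors, half_fov=np.deg2rad(40.0), n_candidates=72):
    """own: (2,) position; neighbors: (N, 2) positions. Returns a yaw angle."""
    bearings = np.arctan2(neighbors[:, 1] - own[1], neighbors[:, 0] - own[0])
    candidates = np.linspace(-np.pi, np.pi, n_candidates, endpoint=False)

    def covered(yaw):
        diff = np.angle(np.exp(1j * (bearings - yaw)))  # wrap to (-pi, pi]
        return np.sum(np.abs(diff) <= half_fov)

    return max(candidates, key=covered)
```

Whereas this toy decides greedily per agent, the paper's graph-based formulation coordinates the observation scheme across the swarm.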
Flying robots such as quadrotors could provide an efficient approach to medical treatment or sensor placement for wild animals. In these applications, continuously targeting the moving animal is a crucial requirement. Due to the underactuated characteristics of the quadrotor and its coupled kinematics with the animal, nonlinear optimal tracking approaches, rather than smooth feedback control, are required. However, with severe nonlinearities, evaluating control inputs is time-consuming, and real-time tracking may not be achievable with generic optimizers onboard. To tackle this problem, a novel egocentric regulation approach with high computational efficiency is proposed in this paper. Specifically, it formulates the optimal tracking problem directly in an egocentric manner in the quadrotor's body coordinates. Meanwhile, the nonlinearities of the system are peeled off through a mapping of the feedback states and control inputs between the inertial and body coordinates. In this way, the proposed efficient egocentric regulator only requires solving a quadratic performance objective with linear constraints and then generates control inputs analytically. Comparative simulations and a mimic biological experiment are carried out to verify the effectiveness and computational efficiency. Results demonstrate that the proposed control approach achieves the highest and most stable computational efficiency compared with generic optimizers on different platforms. In particular, on a commonly used onboard computer, our method computes the control action in approximately 0.3 ms, roughly 350 times faster than generic nonlinear optimizers, establishing a control frequency of around 3000 Hz.
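The analytic step can be seen with a small sketch (our generic illustration, not the paper's exact formulation): an equality-constrained quadratic program has a closed-form KKT solution, so no iterative optimizer is needed:

```python
import numpy as np

def solve_qp_eq(H, g, A, b):
    """min 0.5*u'Hu + g'u  s.t.  Au = b, via the linear KKT system."""
    n, m = H.shape[0], A.shape[0]
    kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b])
    return np.linalg.solve(kkt, rhs)[:n]   # control inputs u*

H = np.diag([2.0, 2.0])                    # example cost weights
u = solve_qp_eq(H, np.array([-1.0, 0.0]),
                np.array([[1.0, 1.0]]), np.array([0.5]))
```

Solving one small linear system per control step, instead of a nonlinear program, is what makes sub-millisecond control rates plausible.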
With the rapid increase in the number of public code repositories, developers have a strong desire to retrieve precise code snippets using natural language. Although existing deep-learning-based approaches (e.g., DeepCS and MMAN) provide end-to-end solutions (i.e., accepting natural language queries and directly retrieving related code fragments from a code corpus), the accuracy of code search on large-scale repositories is still limited by the code representation (e.g., the AST) and the modeling (e.g., directly fusing the features in the attention stage). In this paper, we propose a novel learnable deep Graph for Code Search (called deGraphCS) that transfers source code into variable-based flow graphs using intermediate representation techniques, which models code semantics more precisely than processing code as plain text or using syntactic tree representations. Furthermore, we propose a well-designed graph optimization mechanism to refine the code representation, and apply an improved gated graph neural network to model the variable-based flow graphs. To evaluate the effectiveness of deGraphCS, we collect a large-scale dataset from GitHub containing 41,152 code snippets written in C, and reproduce several typical deep code search methods for comparison. We also design a qualitative user study to verify the practical value of our approach. The experimental results show that deGraphCS achieves state-of-the-art performance and accurately retrieves code snippets satisfying the needs of users.
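A minimal gated graph neural network step (a generic sketch, not the paper's improved variant): node states on the flow graph are updated by a GRU cell fed with messages aggregated from neighbors:

```python
import torch
import torch.nn as nn

class GGNNStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)   # per-edge message transform
        self.gru = nn.GRUCell(dim, dim)  # gated node-state update

    def forward(self, h, adj):
        """h: (N, dim) node states; adj: (N, N) adjacency matrix."""
        m = adj @ self.msg(h)            # aggregate neighbor messages
        return self.gru(m, h)

h = torch.randn(5, 16)                   # 5 variable nodes
adj = torch.eye(5)                       # toy graph: self-loops only
h = GGNNStep(16)(h, adj)                 # one propagation step
```

Running several such steps lets information flow along the variable-based graph before it is pooled into a code embedding.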
We present self-supervised geometric perception (SGP), the first general framework to learn a feature descriptor for correspondence matching without any ground-truth geometric model labels (e.g., camera poses, rigid transformations). Our first contribution is to formulate geometric perception as an optimization problem that jointly optimizes the feature descriptor and the geometric models given a large corpus of visual measurements (e.g., images, point clouds). Under this optimization formulation, we show that two important streams of research in vision, namely robust model fitting and deep feature learning, correspond to optimizing one block of the unknown variables while fixing the other block. This analysis naturally leads to our second contribution -- the SGP algorithm that performs alternating minimization to solve the joint optimization. SGP iteratively executes two meta-algorithms: a teacher that performs robust model fitting given learned features to generate geometric pseudo-labels, and a student that performs deep feature learning under noisy supervision of the pseudo-labels. As a third contribution, we apply SGP to two perception problems on large-scale real datasets, namely relative camera pose estimation on MegaDepth and point cloud registration on 3DMatch. We demonstrate that SGP achieves state-of-the-art performance that is on par with or superior to the supervised oracles trained using ground-truth labels.
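The alternation itself is compact; here is a schematic of it (structure only, with placeholder teacher and student functions standing in for the components described above):

```python
def robust_fit(pair, descriptor):
    """Placeholder teacher: robust model fitting from descriptor matches,
    producing a geometric pseudo-label (e.g., a relative pose)."""
    return None

def train_descriptor(pairs, labels, init):
    """Placeholder student: deep feature learning under noisy pseudo-labels."""
    return init

def sgp(pairs, descriptor, n_rounds=5):
    """Alternating minimization: teacher, then student, repeated."""
    for _ in range(n_rounds):
        labels = [robust_fit(p, descriptor) for p in pairs]       # teacher
        descriptor = train_descriptor(pairs, labels, descriptor)  # student
    return descriptor
```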