Adaptive autonomous navigation without prior knowledge of external disturbances is of great significance for quadrotors operating in complex, unknown environments. The mainstream approach to handling external disturbances is disturbance-rejection control combined with path tracking. However, robust control that compensates for tracking deviations is rarely considered from the perspective of energy consumption, and the reference path itself may become risky or intractable under disturbance. With recent advances in external force estimation, it is now possible to incorporate a real-time force estimator into more robust and safer planning frameworks. This paper proposes a systematic (re)planning framework that resiliently generates safe trajectories under volatile conditions. First, a front-end kinodynamic path is searched with force-biased motion primitives. Then we develop a nonlinear model predictive controller (NMPC) as a local planner, together with Hamilton-Jacobi (HJ) forward reachability analysis for the error dynamics caused by external forces. Collision avoidance is guaranteed by constraining the quadrotor's body ellipsoid, expanded with the forward reachable sets (FRSs), to lie within safe convex polytopes. Our method is validated in simulations and real-world experiments with different sources of external forces.
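The ellipsoid-in-polytope constraint mentioned above has a standard closed-form test: an ellipsoid {c + Eu : ||u|| ≤ 1} lies inside the half-space aᵀx ≤ b iff aᵀc + ||Eᵀa|| ≤ b. The sketch below is illustrative only (`ellipsoid_in_polytope` is a hypothetical helper, not code from the paper), but the containment condition itself is exact.

```python
import numpy as np

def ellipsoid_in_polytope(c, E, A, b):
    """Check that the ellipsoid {c + E u : ||u|| <= 1} lies inside
    the convex polytope {x : A x <= b} (one row of A per face)."""
    for a_i, b_i in zip(A, b):
        # Support function of the ellipsoid along a_i: a_i @ c + ||E^T a_i||.
        if a_i @ c + np.linalg.norm(E.T @ a_i) > b_i:
            return False
    return True
```

In the paper's setting, `E` would be the body ellipsoid inflated by the FRS of the disturbance error dynamics, and the test would enter the NMPC as a per-face inequality constraint.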
Recently, the community has witnessed numerous datasets built for developing and testing state estimators. However, in applications such as aerial transportation or search-and-rescue, contact forces and other disturbances must be perceived for robust planning and control, which is beyond the scope of these datasets. This paper introduces a Visual-Inertial-Dynamical (VID) dataset that not only targets traditional six-degrees-of-freedom (6-DOF) pose estimation but also provides the dynamical characteristics of the flight platform for external force perception and dynamics-aided estimation. The VID dataset contains hardware-synchronized imagery and inertial measurements, with accurate ground-truth trajectories for evaluating common visual-inertial estimators. Moreover, the dataset highlights measurements of rotor speed and motor current, dynamical inputs, and ground-truth 6-axis force data for evaluating external force estimation. To the best of our knowledge, the proposed VID dataset is the first public dataset containing visual-inertial and complete dynamical information for pose and external force evaluation. The dataset and related open-source files are available at \url{https://github.com/ZJU-FAST-Lab/VID-Dataset}.
The visibility of targets determines the performance, and even the success rate, of various applications such as active SLAM, exploration, and target tracking. It is therefore crucial to take target visibility into explicit account in trajectory planning. In this paper, we propose a general metric for target visibility that considers observation distance and angle as well as occlusion effects. We formulate this metric as a differentiable visibility cost function, with which the spatial trajectory and yaw can be jointly optimized. Furthermore, this visibility-aware trajectory optimization handles the dynamic feasibility of position and yaw simultaneously. To validate that our method is practical and generic, we integrate it into a customized quadrotor tracking system. The experimental results show that our visibility-aware planner performs more robustly and observes targets better. To benefit related research, we release our code to the public.
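A differentiable visibility cost of the kind described above can be sketched as a sum of smooth penalties on distance, viewing angle, and occlusion. The function below is a minimal illustrative surrogate, not the paper's actual metric; all names (`visibility_cost`, the clearance input, the weights) are assumptions made for the example.

```python
import numpy as np

def visibility_cost(p, yaw, p_target, d_des, clearance, w=(1.0, 1.0, 1.0)):
    """Toy differentiable visibility cost in 2-D.
    p, p_target : drone / target positions (arrays of length 2)
    d_des       : desired observation distance
    clearance   : precomputed distance from the sight line to obstacles
    """
    d_vec = p_target - p
    d = np.linalg.norm(d_vec)
    cost_dist = w[0] * (d - d_des) ** 2          # keep a good viewing distance
    bearing = np.arctan2(d_vec[1], d_vec[0])
    ang_err = np.arctan2(np.sin(bearing - yaw),  # wrapped yaw-alignment error
                         np.cos(bearing - yaw))
    cost_ang = w[1] * ang_err ** 2               # point the camera at the target
    cost_occ = w[2] * max(0.0, 1.0 - clearance) ** 2  # penalize near-occlusion
    return cost_dist + cost_ang + cost_occ
```

Because every term is smooth in `p` and `yaw`, a gradient-based optimizer can shape position and yaw jointly, which is the property the abstract emphasizes.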
In recent years, several works have advanced the development of aerial tracking. One representative work is our previous system, Fast-tracker, which is applicable to various challenging tracking scenarios. However, it suffers from two main drawbacks: 1) the oversimplification of target detection through the use of artificial markers, and 2) the contradiction between simultaneous target and environment perception with limited onboard vision. In this paper, we upgrade the target detection in Fast-tracker to detect and localize a human target based on deep learning and nonlinear regression, solving the former problem. For the latter, we equip the quadrotor system with 360-degree active vision on a customized gimbal camera. Furthermore, we improve the tracking trajectory planning in Fast-tracker by incorporating an occlusion-aware mechanism that generates observable tracking trajectories. Comprehensive real-world tests confirm the proposed system's robustness and real-time capability. Benchmark comparisons with Fast-tracker validate that the proposed system delivers better tracking performance even on more difficult tracking tasks.
The development of aerial autonomy has enabled aerial robots to fly agilely in complex environments. However, dodging fast-moving objects in flight remains a challenge, limiting the further application of unmanned aerial vehicles (UAVs). The bottleneck of this problem is the accurate perception of rapidly moving objects. Recently, event cameras have shown great potential in addressing it. This paper presents a complete perception system, including ego-motion compensation, object detection, and trajectory prediction, for fast-moving dynamic objects with low latency and high precision. First, we propose an accurate ego-motion compensation algorithm that accounts for both rotational and translational motion, enabling more robust object detection. Then, for dynamic object detection, we design an efficient event-camera-based regression algorithm. Finally, we propose an optimization-based approach that asynchronously fuses event and depth cameras for trajectory prediction. Extensive real-world experiments and benchmarks are performed to validate our framework. Moreover, our code will be released to benefit related research.
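The rotational part of event-camera ego-motion compensation can be sketched with the standard pinhole model: back-project each event pixel to a bearing ray, undo the camera rotation, and re-project. This is a minimal sketch under assumed inputs (pixel array, rotation matrix `R` over the compensation window, intrinsics `K`); the paper's algorithm additionally compensates translation, which requires depth and is omitted here.

```python
import numpy as np

def compensate_rotation(events, R, K):
    """Warp event pixels back to a reference frame, rotation only.
    events : (N, 2) array of pixel coordinates
    R      : 3x3 camera rotation accumulated over the event window
    K      : 3x3 pinhole intrinsics matrix
    """
    K_inv = np.linalg.inv(K)
    uv1 = np.hstack([events, np.ones((len(events), 1))])  # homogeneous pixels
    rays = (R.T @ (K_inv @ uv1.T)).T                      # back-rotate rays
    proj = (K @ rays.T).T
    return proj[:, :2] / proj[:, 2:3]                     # perspective divide
```

After this warp, events from the static background collapse onto sharp edges, so pixels that remain blurred or misaligned indicate independently moving objects — the cue the detection stage exploits.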
In this paper, we introduce a complete system for the autonomous flight of quadrotors in dynamic environments using onboard sensing. Extending existing work, we develop an occlusion-aware dynamic perception method based on depth images, which classifies obstacles as dynamic or static. To represent generic dynamic environments, we model dynamic objects as moving ellipsoids and fuse static ones into an occupancy grid map. To achieve dynamic avoidance, we design a planning method composed of modified kinodynamic path searching and gradient-based optimization. The method leverages manually constructed gradients without maintaining a signed distance field (SDF), allowing the planning procedure to finish in milliseconds. We integrate the above methods into a customized quadrotor system and thoroughly test it in real-world experiments, verifying its effective collision avoidance in dynamic environments.
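The idea of a manually constructed gradient (no SDF) for a moving ellipsoidal obstacle can be illustrated with a simple analytic penalty: predict the obstacle center at time t, measure the axis-scaled distance, and differentiate the penalty in closed form. This is an illustrative sketch, not the paper's actual cost; the constant-velocity prediction and all names are assumptions.

```python
import numpy as np

def dynamic_collision_cost(p, t, obs_p0, obs_v, radii, margin=0.2):
    """Analytic collision penalty and gradient for a moving ellipsoid.
    p          : query position on the trajectory at time t
    obs_p0/v   : obstacle center and velocity (constant-velocity prediction)
    radii      : ellipsoid semi-axes; margin inflates them for safety
    """
    c = obs_p0 + obs_v * t                 # predicted obstacle center
    s = radii + margin
    diff = (p - c) / s                     # axis-scaled offset
    d = np.linalg.norm(diff)               # d >= 1 means outside the ellipsoid
    if d >= 1.0:
        return 0.0, np.zeros(3)
    cost = (1.0 - d) ** 2
    # Hand-derived gradient of (1 - d)^2 w.r.t. p -- no SDF lookup needed.
    grad = -2.0 * (1.0 - d) * diff / s / d
    return cost, grad
```

Because the cost and gradient are closed-form per obstacle, the optimizer can evaluate them in microseconds, which is what lets the whole planning loop finish in milliseconds.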
Neural network pruning is an essential approach for reducing the computational complexity of deep models so that they can be deployed on resource-limited devices. Compared with conventional methods, recently developed dynamic pruning methods determine redundant filters individually for each input instance, achieving higher acceleration. Most existing methods discover effective sub-networks for each instance independently and do not exploit the relationships between different inputs. To maximally excavate the redundancy in a given network architecture, this paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks (dubbed ManiDP). We first investigate the recognition complexity and feature similarity between images in the training set. Then, the manifold relationship between instances and the pruned sub-networks is aligned during training. The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost compared to state-of-the-art methods. For example, our method reduces the FLOPs of ResNet-34 by 55.3% with only 0.57% top-1 accuracy degradation on ImageNet.
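The instance-wise filter removal underlying dynamic pruning can be sketched as a learned channel gate: pool the feature map per channel, score each channel for the current input, and skip channels below a threshold. This is a generic dynamic-gating sketch, not ManiDP itself (which additionally aligns the instance manifold with the sub-network space during training); `dynamic_gate` and its parameters are hypothetical.

```python
import numpy as np

def dynamic_gate(feature, weight, threshold=0.5):
    """Instance-dependent channel gating on a (C, H, W) feature map.
    weight : (C, C) learned scoring matrix mapping pooled channel
             statistics to per-channel saliency logits.
    """
    pooled = feature.mean(axis=(1, 2))                    # (C,) channel stats
    saliency = 1.0 / (1.0 + np.exp(-(weight @ pooled)))   # sigmoid scores
    mask = saliency > threshold                           # keep/skip decision
    # Skipped channels contribute no computation in a real implementation;
    # here we zero them to show the effect.
    return feature * mask[:, None, None], mask
```

Because the mask depends on the input's pooled statistics, easy inputs can activate fewer channels than hard ones, which is the source of the per-instance acceleration the abstract describes.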
For real-time multirotor kinodynamic motion planning, the efficiency of sampling-based methods is usually hindered by difficult-to-sample homotopy classes such as narrow passages. In this paper, we address this issue with a hybrid scheme. We first propose a fast regional optimizer that exploits information from local environments and then integrate it into a global sampling process to ensure faster convergence. Incorporating this local optimization into different sampling-based methods yields significantly improved success rates and shorter planning times in various types of challenging environments. We also present a refinement module that fully exploits the trajectory resulting from the global sampling and greatly improves its smoothness with negligible computational effort. Benchmark results illustrate that, compared to state-of-the-art methods, our proposed method can better exploit a previous trajectory. The planning methods are applied to generate trajectories for a simulated quadrotor system, and their capability is validated in real-time applications.
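The general idea of post-processing a sampled path can be illustrated with classic greedy shortcutting: from each kept waypoint, jump to the farthest waypoint reachable by a collision-free straight segment. The paper's refinement module is more sophisticated (it targets smoothness of the final trajectory), so this is only a simple stand-in for the concept; `shortcut_smooth` and the `collision_free` callback are assumptions of the example.

```python
def shortcut_smooth(path, collision_free):
    """Greedy shortcutting of a waypoint path.
    collision_free(a, b) must report whether the straight segment
    between waypoints a and b is free of obstacles.
    """
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # Find the farthest waypoint directly reachable from path[i];
        # the adjacent waypoint (j == i + 1) is always accepted.
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out
```

Such a pass runs in a single sweep over the waypoints, which matches the abstract's claim that refinement costs negligible computation relative to the sampling itself.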
We present an optimization-based framework for multicopter trajectory planning subject to geometrical spatial constraints and user-defined dynamic constraints. The basis of the framework is a novel trajectory representation built upon our optimality conditions for unconstrained control effort minimization. We design linear-complexity operations on this representation to conduct spatial-temporal deformation under various planning requirements. Smooth maps are utilized to exactly eliminate geometrical constraints in a lightweight fashion. A wide range of state-input constraints is supported by decoupling dense constraint evaluation from sparse parameterization, along with backward differentiation of the flatness map. As a result, the proposed framework transforms a generally constrained multicopter planning problem into an unconstrained optimization that can be solved reliably and efficiently. Our framework bridges the gaps among solution quality, planning frequency, and constraint fidelity for a multicopter with limited resources and maneuvering capability. Its generality and robustness are demonstrated by applications and experiments on different tasks. Extensive simulations and benchmarks are also conducted to show that it generates high-quality solutions while maintaining computation speeds faster than other specialized methods by orders of magnitude. Details and source code of our framework will be freely available at: http://zju-fast.com/gcopter.
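The simplest instance of unconstrained control-effort minimization with fixed boundary states is the minimum-jerk quintic between two position/velocity/acceleration triples: the six boundary conditions uniquely determine six polynomial coefficients via one small linear solve. This is a standard textbook building block, shown here to illustrate the flavor of the representation; the paper's framework generalizes far beyond a single 1-D segment.

```python
import numpy as np

def min_jerk_segment(p0, v0, a0, p1, v1, a1, T):
    """Coefficients c0..c5 of the jerk-optimal quintic p(t) = sum c_i t^i
    matching position/velocity/acceleration at t = 0 and t = T."""
    M = np.zeros((6, 6))
    # Rows: (derivative order d, evaluation time t) for the six conditions.
    for k, (d, t) in enumerate([(0, 0), (1, 0), (2, 0), (0, T), (1, T), (2, T)]):
        for i in range(d, 6):
            # d-th derivative of t^i has factor i*(i-1)*...*(i-d+1).
            coef = np.prod(range(i - d + 1, i + 1)) if d else 1.0
            M[k, i] = coef * t ** (i - d)
    b = np.array([p0, v0, a0, p1, v1, a1], float)
    return np.linalg.solve(M, b)
```

For a rest-to-rest motion from 0 to 1 over unit time this recovers the classical profile p(t) = 10t³ − 15t⁴ + 6t⁵; a full planner stitches many such segments and additionally optimizes the segment times, which is where the linear-complexity spatial-temporal operations come in.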