
Guido C. H. E. de Croon


AOSoar: Autonomous Orographic Soaring of a Micro Air Vehicle

Aug 01, 2023
Sunyou Hwang, Bart D. W. Remes, Guido C. H. E. de Croon


Utilizing the wind hovering techniques of soaring birds can reduce energy expenditure and improve the flight endurance of micro air vehicles (MAVs). Here, we present a novel method for fully autonomous orographic soaring without a priori knowledge of the wind field. Specifically, we devise an Incremental Nonlinear Dynamic Inversion (INDI) controller with control allocation, adapting it for autonomous soaring. This allows for both soaring and the use of the throttle if necessary, without changing any gain or parameter during the flight. Furthermore, we propose a simulated-annealing-based optimization method to search for soaring positions. This enables an MAV, for the first time, to autonomously find a feasible soaring position while minimizing throttle usage and other control efforts. Autonomous orographic soaring was performed in a wind tunnel, with the wind speed and the incline of a ramp changed during the soaring flight. The MAV performed autonomous orographic soaring for flight times of up to 30 minutes, with a mean throttle usage of only 0.25% over the entire soaring flight, whereas normal powered flight requires 38%. It was also shown that the MAV can find a new soaring spot when the wind field changes during the flight.
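The simulated-annealing search for a soaring spot can be sketched as follows. The cost function, updraft field, and cooling schedule below are illustrative assumptions, not the paper's actual formulation:

```python
import math
import random

def required_throttle(pos, wind_updraft):
    """Hypothetical cost: throttle needed to hold a 2-D position (x, z).
    The updraft field stands in for the unknown wind over the ramp."""
    x, z = pos
    sink_rate = 0.5                       # vehicle sink rate in still air (m/s)
    updraft = wind_updraft(x, z)          # vertical wind component (m/s)
    return max(0.0, sink_rate - updraft)  # throttle only if updraft is too weak

def anneal_soaring_position(cost, start, n_iters=5000, t0=1.0, step=0.5, seed=0):
    """Simulated annealing over candidate positions, in the spirit of the
    paper's search for a feasible soaring spot (details here are assumed)."""
    rng = random.Random(seed)
    pos = best = start
    c = best_c = cost(pos)
    for i in range(n_iters):
        t = t0 * (1.0 - i / n_iters) + 1e-9           # linear cooling schedule
        cand = (pos[0] + rng.uniform(-step, step),
                pos[1] + rng.uniform(-step, step))
        cc = cost(cand)
        # accept better moves always, worse moves with Boltzmann probability
        if cc < c or rng.random() < math.exp(-(cc - c) / t):
            pos, c = cand, cc
            if c < best_c:
                best, best_c = pos, c
    return best, best_c

# Toy updraft field: strongest near (2, 1), mimicking flow over a ramp crest.
updraft = lambda x, z: 1.2 * math.exp(-((x - 2.0) ** 2 + (z - 1.0) ** 2))
spot, throttle = anneal_soaring_position(
    lambda p: required_throttle(p, updraft), start=(0.0, 0.0))
```

The annealer keeps the best position seen, so it never returns a spot worse than the starting point even if the final accepted move was an uphill one.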

* 8 pages, 11 figures, accepted to IROS 2023 

Autonomous Control for Orographic Soaring of Fixed-Wing UAVs

May 23, 2023
Tom Suys, Sunyou Hwang, Guido C. H. E. de Croon, Bart D. W. Remes


We present a novel controller for fixed-wing UAVs that enables autonomous soaring in an orographic wind field, extending flight endurance. Our method identifies soaring regions and addresses position-control challenges by introducing a target gradient line (TGL) on which the UAV reaches an equilibrium soaring position, where sink rate and updraft are balanced. Experimental testing validates the controller's effectiveness in maintaining autonomous soaring flight without using any thrust in a non-static wind field. We also demonstrate a single degree of freedom of control over the soaring position through manipulation of the TGL.
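The equilibrium that the controller regulates toward, where updraft cancels the aircraft's sink rate, can be illustrated numerically. The updraft profile and the bisection solver below are assumptions for the sketch, not the paper's wind model:

```python
import math

def updraft_along_tgl(s):
    """Hypothetical updraft (m/s) as a function of position s (m) along the
    target gradient line: strong near the slope, decaying with distance."""
    return 2.0 * math.exp(-0.5 * s)

def equilibrium_position(sink_rate, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection for the point where updraft balances the aircraft's sink
    rate -- the equilibrium soaring position on the TGL."""
    f = lambda s: updraft_along_tgl(s) - sink_rate
    # the updraft decays monotonically, so a sign change brackets the root
    assert f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_eq = equilibrium_position(sink_rate=1.0)  # where 2*exp(-0.5*s) == 1
```

For this profile the balance point is at s = 2 ln 2 ≈ 1.386 m; shifting the TGL (the paper's single degree of control freedom) shifts this equilibrium along the line.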

* 6+1 pages, 9 figures, accepted to ICRA 2023 

Optimality Principles in Spacecraft Neural Guidance and Control

May 22, 2023
Dario Izzo, Emmanuel Blazquez, Robin Ferede, Sebastien Origer, Christophe De Wagter, Guido C. H. E. de Croon


Spacecraft and drones aimed at exploring our solar system are designed to operate in conditions where the smart use of onboard resources can determine the success or failure of the mission. Sensorimotor actions are thus often derived from high-level, quantifiable optimality principles assigned to each task, utilizing consolidated tools in optimal control theory. The planned actions are derived on the ground and transferred onboard, where controllers have the task of tracking the uploaded guidance profile. Here we argue that end-to-end neural guidance and control architectures (here called G&CNets) allow transferring onboard the burden of acting upon these optimality principles. In this way, the sensor information is transformed in real time into optimal plans, thus increasing mission autonomy and robustness. We discuss the main results obtained in training such neural architectures in simulation for interplanetary transfers, landings and close proximity operations, highlighting the successful learning of optimality principles by the neural model. We then suggest drone racing as an ideal gym environment to test these architectures on real robotic platforms, thus increasing confidence in their utilization on future space exploration missions. Drone racing shares with spacecraft missions both limited onboard computational capabilities and similar control structures induced by the optimality principle sought, but it also entails different levels of uncertainties and unmodelled effects. Furthermore, the success of G&CNets on extremely resource-restricted drones illustrates their potential to bring real-time optimal control within reach of a wider variety of robotic systems, both in space and on Earth.
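A G&CNet in this sense is simply a network mapping the raw state directly to actuator commands. Below is a minimal forward-pass sketch; the layer sizes, the 7-D state (position, velocity, mass), and the 3-D bounded control are assumptions for illustration, not the architectures used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained illustrative weights; in practice these would be fit by
# supervised learning on a dataset of optimal trajectories.
W1, b1 = rng.normal(0, 0.5, (32, 7)), np.zeros(32)
W2, b2 = rng.normal(0, 0.5, (3, 32)), np.zeros(3)

def gcnet(state):
    """End-to-end mapping: state vector -> bounded control in [-1, 1]^3."""
    h = np.tanh(W1 @ state + b1)   # hidden features
    return np.tanh(W2 @ h + b2)    # squashed actuator command

# One inference call, as would run onboard each control cycle.
u = gcnet(np.array([1.0, 0.0, -0.5, 0.0, 0.1, 0.0, 100.0]))
```

The appeal is that this single forward pass replaces the ground-computed guidance profile plus tracking controller, which is what makes it attractive for resource-limited platforms.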


Guidance & Control Networks for Time-Optimal Quadcopter Flight

May 04, 2023
Sebastien Origer, Christophe De Wagter, Robin Ferede, Guido C. H. E. de Croon, Dario Izzo


Reaching fast and autonomous flight requires computationally efficient and robust algorithms. To this end, we train Guidance & Control Networks to approximate optimal control policies ranging from energy-optimal to time-optimal flight. We show that the policies become more difficult to learn the closer we get to the time-optimal 'bang-bang' control profile. We also assess the importance of knowing the maximum angular rotor velocity of the quadcopter and show that over- or underestimating this limit leads to less robust flight. We propose an algorithm to identify the current maximum angular rotor velocity onboard and a network that adapts its policy based on the identified limit. Finally, we extend previous work on Guidance & Control Networks by learning to take consecutive waypoints into account. We fly a 4x3 m track in lap times similar to those of the differential-flatness-based minimum-snap benchmark controller, while benefiting from the flexibility that Guidance & Control Networks offer.
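The idea of identifying the rotor-speed limit in flight can be sketched as a filtered estimate that is updated whenever the command saturates, since only then does the measured rotor speed reveal the true ceiling. The function and its parameters below are hypothetical, not the paper's identification algorithm:

```python
def identify_omega_max(cmd, omega_meas, omega_max_hat, sat_level=0.98, alpha=0.1):
    """Online estimate of the maximum angular rotor velocity.
    cmd is the normalized rotor command in [0, 1]; when it is (near-)saturated,
    the measured speed is informative about the limit and we low-pass toward it."""
    if cmd >= sat_level:                               # rotor pushed to its ceiling
        omega_max_hat += alpha * (omega_meas - omega_max_hat)
    return omega_max_hat                               # unchanged otherwise

true_limit = 3000.0   # rad/s, illustrative value
est = 3500.0          # initial guess overestimates the limit
for _ in range(100):
    # full-throttle samples: the rotor settles at its true maximum speed
    est = identify_omega_max(cmd=1.0, omega_meas=true_limit, omega_max_hat=est)
# est has converged near true_limit and can condition the adaptive policy
```

An estimate like this would then be fed to the network as an extra input so the policy can adapt to the identified limit.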


An Adaptive Control Strategy for Neural Network based Optimal Quadcopter Controllers

Apr 26, 2023
Robin Ferede, Guido C. H. E. de Croon, Christophe De Wagter, Dario Izzo


Developing optimal controllers for aggressive high-speed quadcopter flight is a major challenge in the field of robotics. Recent work has shown that neural networks trained with supervised learning can achieve real-time optimal control in some specific scenarios. In these methods, the networks (termed G&CNets) are trained to learn the optimal state feedback from a dataset of optimal trajectories. An important problem with these methods is the reality gap encountered in the sim-to-real transfer. In this work, we trained G&CNets for energy-optimal end-to-end control on the Bebop drone and identified the unmodeled pitch moment as the main contributor to the reality gap. To mitigate this, we propose an adaptive control strategy that works by learning from optimal trajectories of a system affected by constant external pitch, roll and yaw moments. In real test flights, this model mismatch is estimated onboard and fed to the network to obtain the optimal rpm command. We demonstrate the effectiveness of our method by performing energy-optimal hover-to-hover flights with and without moment feedback. Finally, we compare the adaptive controller to a state-of-the-art differential-flatness-based controller in a consecutive waypoint flight and demonstrate the advantages of our method in terms of energy optimality and robustness.
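The onboard model-mismatch estimate can be illustrated as a first-order filter on the residual between measured and model-predicted angular acceleration, scaled by the inertia. The filter and the inertia values below are assumptions for the sketch, not the estimator used in the paper:

```python
import numpy as np

def update_moment_estimate(m_hat, omega_dot_meas, omega_dot_model,
                           inertia, alpha=0.05):
    """Low-pass estimate of a constant external (roll, pitch, yaw) moment.
    The acceleration residual times inertia is the instantaneous disturbance
    moment; alpha smooths it against measurement noise."""
    residual = inertia @ (omega_dot_meas - omega_dot_model)
    return (1 - alpha) * m_hat + alpha * residual

inertia = np.diag([1e-3, 1e-3, 2e-3])  # kg m^2, illustrative values
true_m = np.array([0.0, 5e-4, 0.0])    # constant unmodeled pitch moment
m_hat = np.zeros(3)
for _ in range(200):
    omega_dot_model = np.zeros(3)                      # model expects no disturbance
    omega_dot_meas = np.linalg.solve(inertia, true_m)  # actual noise-free response
    m_hat = update_moment_estimate(m_hat, omega_dot_meas, omega_dot_model, inertia)
# m_hat now approximates true_m and can be fed to the G&CNet as extra input
```

Because the network was trained on trajectories that include such constant moments, feeding it the estimate lets it output the corresponding optimal rpm command despite the model mismatch.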

* 7 pages, 11 figures 

Neuromorphic computing for attitude estimation onboard quadrotors

Apr 18, 2023
Stein Stroobants, Julien Dupeyroux, Guido C. H. E. de Croon

Compelling evidence has been given for the high energy efficiency and update rates of neuromorphic processors, with performance beyond what standard von Neumann architectures can achieve. Such promising features could be advantageous in critical embedded systems, especially in robotics. To date, the constraints inherent in robots (e.g., size and weight, battery autonomy, available sensors, computing resources, processing time, etc.), and particularly in aerial vehicles, severely hamper the performance of fully-autonomous on-board control, including sensor processing and state estimation. In this work, we propose a spiking neural network (SNN) capable of estimating the pitch and roll angles of a quadrotor in highly dynamic movements from 6-degree-of-freedom Inertial Measurement Unit (IMU) data. With only 150 neurons and a limited training dataset obtained using a quadrotor in a real-world setup, the network shows results competitive with state-of-the-art, non-neuromorphic attitude estimators. The proposed architecture was successfully tested on the Loihi neuromorphic processor on board a quadrotor to estimate the attitude when flying. Our results show the robustness of neuromorphic attitude estimation and pave the way towards energy-efficient, fully autonomous control of quadrotors with dedicated neuromorphic computing systems.

* Neuromorphic Computing and Engineering 2.3 (2022): 034005  

Neuromorphic Control using Input-Weighted Threshold Adaptation

Apr 18, 2023
Stein Stroobants, Christophe De Wagter, Guido C. H. E. de Croon


Neuromorphic processing promises high energy efficiency and rapid response rates, making it an ideal candidate for achieving autonomous flight of resource-constrained robots. It will be especially beneficial for complex neural networks such as those involved in high-level visual perception. However, fully neuromorphic solutions will also need to tackle low-level control tasks. Remarkably, it is currently still challenging to replicate even basic low-level controllers such as proportional-integral-derivative (PID) controllers. Specifically, it is difficult to incorporate the integral and derivative parts. To address this problem, we propose a neuromorphic controller that incorporates proportional, integral, and derivative pathways during learning. Our approach includes a novel input threshold adaptation mechanism for the integral pathway. This Input-Weighted Threshold Adaptation (IWTA) introduces an additional weight per synaptic connection, which is used to adapt the threshold of the post-synaptic neuron. We tackle the derivative term by employing neurons with different time constants. We first analyze the performance and limits of the proposed mechanisms and then put our controller to the test by implementing it on a microcontroller connected to the open-source tiny Crazyflie quadrotor, replacing the innermost rate controller. We demonstrate the stability of our bio-inspired algorithm with flights in the presence of disturbances. The current work represents a substantial step towards controlling highly dynamic systems with neuromorphic algorithms, thus advancing neuromorphic processing and robotics. In addition, since integration is an important part of any temporal task, the proposed IWTA mechanism may have implications well beyond control tasks.
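A minimal IWTA-style neuron might look like the following. The discrete-time dynamics, decay constants, and weights are assumed simplifications of the paper's mechanism, chosen only to show the second per-synapse weight acting on the threshold:

```python
def iwta_neuron_step(v, theta, inputs, w, w_theta,
                     v_decay=0.9, theta_rest=1.0, theta_decay=0.99):
    """One step of a leaky integrate-and-fire neuron with Input-Weighted
    Threshold Adaptation: each synapse carries a second weight (w_theta)
    that pushes the firing threshold of the post-synaptic neuron."""
    # membrane potential: leaky integration of the weighted inputs
    v = v_decay * v + sum(wi * xi for wi, xi in zip(w, inputs))
    # threshold: leaks back to rest, plus input-weighted adaptation
    theta = theta_decay * theta + (1 - theta_decay) * theta_rest \
            + sum(wti * xi for wti, xi in zip(w_theta, inputs))
    spike = v >= theta
    if spike:
        v = 0.0  # reset membrane potential on spike
    return v, theta, spike

v, theta = 0.0, 1.0
w, w_theta = [0.6, 0.4], [0.05, -0.02]
spikes = []
for x in ([1, 0], [1, 1], [0, 1], [1, 1], [0, 0]):
    v, theta, s = iwta_neuron_step(v, theta, x, w, w_theta)
    spikes.append(s)
```

The first synapse raises the threshold when active and the second lowers it, so spiking depends on the input pattern rather than on membrane potential alone.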


Taming Contrast Maximization for Learning Sequential, Low-latency, Event-based Optical Flow

Mar 09, 2023
Federico Paredes-Vallés, Kirk Y. W. Scheper, Christophe De Wagter, Guido C. H. E. de Croon


Event cameras have recently gained significant traction since they open up new avenues for low-latency and low-power solutions to complex computer vision problems. To unlock these solutions, it is necessary to develop algorithms that can leverage the unique nature of event data. However, the current state-of-the-art is still highly influenced by the frame-based literature, and usually fails to deliver on these promises. In this work, we take this into consideration and propose a novel self-supervised learning pipeline for the sequential estimation of event-based optical flow that allows for the scaling of the models to high inference frequencies. At its core, we have a continuously-running stateful neural model that is trained using a novel formulation of contrast maximization that makes it robust to nonlinearities and varying statistics in the input events. Results across multiple datasets confirm the effectiveness of our method, which establishes a new state of the art in terms of accuracy for approaches trained or optimized without ground truth.
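At the core of contrast maximization lies a simple idea: the correct flow transports events into a sharp image of warped events (IWE), maximizing its variance. Below is a minimal dense-image sketch; the paper's sequential, robust formulation adds terms not shown here:

```python
import numpy as np

def iwe_variance(events, flow, t_ref=0.0, shape=(8, 8)):
    """Variance of the image of warped events: the contrast-maximization
    objective. Events (x, y, t) are transported to t_ref along a candidate
    flow (pixels/second); correct flow aligns them into sharp edges."""
    img = np.zeros(shape)
    for x, y, t in events:
        # warp each event back to the reference time along the flow
        wx = int(round(x - flow[0] * (t - t_ref)))
        wy = int(round(y - flow[1] * (t - t_ref)))
        if 0 <= wx < shape[1] and 0 <= wy < shape[0]:
            img[wy, wx] += 1.0
    return img.var()

# Events from a two-pixel edge moving right at 2 px/s.
events = [(x0 + 2.0 * t, 4, t) for x0 in (1, 2) for t in (0.0, 0.5, 1.0)]
score_true = iwe_variance(events, flow=(2.0, 0.0))   # correct flow: sharp IWE
score_zero = iwe_variance(events, flow=(0.0, 0.0))   # no warp: blurred IWE
```

A self-supervised learner maximizes this contrast instead of matching ground-truth flow, which is what allows training without labels.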

* 15 pages, 12 figures, 7 tables 

Lightweight Event-based Optical Flow Estimation via Iterative Deblurring

Nov 24, 2022
Yilun Wu, Federico Paredes-Vallés, Guido C. H. E. de Croon


Inspired by frame-based methods, state-of-the-art event-based optical flow networks rely on the explicit computation of correlation volumes, which are expensive to compute and store on systems with limited processing budget and memory. To address this, we introduce IDNet (Iterative Deblurring Network), a lightweight yet well-performing event-based optical flow network without correlation volumes. IDNet leverages the unique spatiotemporally continuous nature of event streams to propose an alternative way of implicitly capturing correlation through iterative refinement and motion deblurring. Our network does not compute correlation volumes but rather utilizes a recurrent network to maximize the spatiotemporal correlation of events iteratively. We further propose two iterative update schemes: "ID", which iterates over the same batch of events, and "TID", which iterates over time with streaming events in an online fashion. Benchmark results show the former "ID" scheme can reach close to state-of-the-art performance with 33% savings in compute and 90% in memory footprint, while the latter "TID" scheme is even more efficient, promising 83% compute savings and 15 times lower latency at the cost of an 18% performance drop.


NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter

Sep 14, 2022
Rik J. Bouwmeester, Federico Paredes-Vallés, Guido C. H. E. de Croon


Nano quadcopters are small, agile, and cheap platforms that are well suited for deployment in narrow, cluttered environments. Due to their limited payload, these vehicles are highly constrained in processing power, rendering conventional vision-based methods for safe and autonomous navigation infeasible. Recent machine learning developments promise high-performance perception at low latency, while dedicated edge computing hardware has the potential to augment the processing capabilities of these limited devices. In this work, we present NanoFlowNet, a lightweight convolutional neural network for real-time dense optical flow estimation on edge computing hardware. We draw inspiration from recent advances in semantic segmentation for the design of this network. Additionally, we guide the learning of optical flow using motion boundary ground truth data, which improves performance with no impact on latency. Validation results on the MPI-Sintel dataset show the high performance of the proposed network given its constrained architecture. Additionally, we successfully demonstrate the capabilities of NanoFlowNet by deploying it on the ultra-low-power GAP8 microprocessor and by applying it to vision-based obstacle avoidance on board a Bitcraze Crazyflie, a 34 g nano quadcopter.

* 8 pages, 9 figures, 4 tables 