Resource-Constrained Station-Keeping for Helium Balloons using Reinforcement Learning

Mar 02, 2023
Jack Saunders, Loïc Prenevost, Özgür Şimşek, Alan Hunter, Wenbin Li

[Figures 1–4: not shown]

High-altitude balloons have proved useful for ecological aerial surveys, atmospheric monitoring, and communication relays. However, due to weight and power constraints, there is a need to investigate alternative modes of propulsion for navigating the stratosphere. Very recently, reinforcement learning has been proposed as a control scheme to keep the balloon in the region of a fixed location, exploiting the diverse, opposing wind fields found at different altitudes. Although air-pump-based station-keeping has been explored, there is no research on the control problem for balloons actuated by venting and ballasting, which are commonly used as a low-cost alternative. We show how reinforcement learning can be used for this type of balloon. Specifically, we use the soft actor-critic algorithm, which on average is able to station-keep within 50 km for 25% of the flight, consistent with the state of the art. Furthermore, we show that the proposed controller effectively minimises the consumption of resources, thereby supporting long-duration flights. We frame the controller as a continuous-control reinforcement learning problem, which allows for a more diverse range of trajectories than the discrete action spaces used in current state-of-the-art work. Continuous control also lets us exploit larger ascent rates, which are not achievable with air pumps. The desired ascent rate is decoupled into a desired altitude and a time-factor, providing a more transparent policy than the low-level control commands used in previous work. Finally, by applying the equations of motion, we establish thresholds on venting and ballasting that keep actions physically feasible and prevent the agent from exploiting the environment.
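The decoupled action described above (a desired altitude plus a time-factor, converted to an ascent-rate command and clipped to physically feasible venting/ballasting limits) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the specific rate limits, and the conversion formula are all assumptions.

```python
import numpy as np

# Assumed actuation limits (illustrative values, not from the paper):
# venting helium can only drive descent; dropping ballast can only drive ascent.
VENT_MAX_DESCENT_RATE = -3.0   # m/s, assumed fastest descent via venting
BALLAST_MAX_ASCENT_RATE = 5.0  # m/s, assumed fastest ascent via ballasting


def commanded_ascent_rate(desired_altitude_m: float,
                          time_factor_s: float,
                          current_altitude_m: float) -> float:
    """Map the decoupled policy output (altitude target, time-factor)
    to an ascent-rate command, enforcing venting/ballasting thresholds
    so the commanded action stays physically feasible."""
    # Desired rate: close the altitude error over the given time-factor.
    raw_rate = (desired_altitude_m - current_altitude_m) / max(time_factor_s, 1e-6)
    # Clip to the feasible envelope implied by the actuators.
    return float(np.clip(raw_rate, VENT_MAX_DESCENT_RATE, BALLAST_MAX_ASCENT_RATE))
```

For example, a target 1 km above the balloon with a 1000 s time-factor yields a 1 m/s climb command, while an aggressive target is clipped to the ballasting limit rather than passed through to the actuators.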
