Abstract: Efficient mission planning for cooperative systems of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) requires addressing energy constraints, scalability, and coordination challenges between agents. UAVs excel at rapidly covering large areas but are constrained by limited battery life, while UGVs, with their extended operational range and ability to serve as mobile recharging stations, are hindered by slower speeds. This heterogeneity makes coordination between UAVs and UGVs critical for achieving optimal mission outcomes. In this work, we propose a scalable deep reinforcement learning (DRL) framework for the energy-constrained cooperative routing problem of multi-agent UAV-UGV teams, where the goal is to visit a set of task points in minimal time and the UAVs rely on the UGVs for recharging during the mission. The framework incorporates sortie-wise agent switching to manage multiple agents efficiently by allocating task points and coordinating their actions. Using an encoder-decoder transformer architecture, it optimizes the routes and recharging rendezvous of the UAV-UGV team for a given task scenario. Extensive computational experiments demonstrate the framework's superior performance over heuristic methods and a DRL baseline, with significant improvements in solution quality and runtime efficiency across diverse scenarios. Generalization studies validate its robustness, while a dynamic-scenario case study highlights its adaptability to real-time changes. This work advances UAV-UGV cooperative routing by providing a scalable, efficient, and robust solution for multi-agent mission planning.
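To make the encoder-decoder routing policy concrete, the following is a minimal sketch, not the authors' implementation: a PyTorch attention model that encodes task and refueling nodes, forms a context query from the graph embedding, the last visited node, and the remaining fuel, and scores the next node to visit while masking nodes already served. All class names, feature layouts, and dimensions here are illustrative assumptions.

```python
# Minimal sketch (assumed names/dimensions, not the paper's code): one decoding
# step of an attention-based encoder-decoder route-construction policy.
import torch
import torch.nn as nn

class RoutePolicy(nn.Module):
    def __init__(self, node_dim=3, embed_dim=128, n_heads=8, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(node_dim, embed_dim)          # (x, y, is_refuel_stop)
        enc_layer = nn.TransformerEncoderLayer(embed_dim, n_heads,
                                               dim_feedforward=512,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.query_proj = nn.Linear(2 * embed_dim + 1, embed_dim)   # graph + last node + fuel
        self.glimpse = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

    def forward(self, nodes, fuel, visited_mask, last_idx):
        """One decoding step: score every node given the current agent state."""
        h = self.encoder(self.embed(nodes))                  # (B, N, D) node embeddings
        graph = h.mean(dim=1)                                # (B, D) graph embedding
        last = h.gather(1, last_idx.view(-1, 1, 1).expand(-1, 1, h.size(-1))).squeeze(1)
        ctx = self.query_proj(torch.cat([graph, last, fuel], dim=-1)).unsqueeze(1)
        q, _ = self.glimpse(ctx, h, h)                       # glimpse over node embeddings
        logits = (q @ h.transpose(1, 2)).squeeze(1)          # (B, N) compatibility scores
        logits = logits.masked_fill(visited_mask, float('-inf'))   # forbid visited nodes
        return torch.softmax(logits, dim=-1)                 # node-selection probabilities

# Example step: 2 scenarios, 10 nodes, agent at node 0 with 60% battery left.
policy = RoutePolicy()
nodes = torch.rand(2, 10, 3)
fuel = torch.full((2, 1), 0.6)
mask = torch.zeros(2, 10, dtype=torch.bool)
probs = policy(nodes, fuel, mask, last_idx=torch.zeros(2, dtype=torch.long))
next_node = probs.argmax(dim=-1)                             # greedy action per scenario
```

In a full rollout, this step would be repeated until all task points are visited, with the mask and fuel state updated after each move and the active agent switched sortie-wise as described above.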
Abstract: Unmanned Aerial Vehicles (UAVs), although adept at aerial surveillance, are often constrained by limited battery capacity. By refueling on slow-moving Unmanned Ground Vehicles (UGVs), UAVs can significantly extend their operational endurance. This paper explores the computationally complex problem of cooperative UAV-UGV routing for surveillance of vast areas under speed and fuel constraints, presenting a sequential multi-agent planning framework that yields feasible, high-quality solutions. In the first step, we use the UAV fuel limits and a minimum set cover algorithm to determine UGV refueling stops, which in turn guide UGV route planning. In the second step, we obtain the UAV routes through a task allocation technique and an energy-constrained vehicle routing problem with time windows (E-VRPTW) formulation. The effectiveness of our multi-agent strategy is demonstrated on 30 task scenarios across three different scales. This work offers significant insight into the collaborative advantages of UAV-UGV systems and introduces heuristic approaches that bypass computational challenges and swiftly reach high-quality solutions.
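To illustrate the first step of the framework, here is a minimal sketch of a greedy minimum set cover that selects UGV refueling stops so that every task point lies within the UAV's fuel-limited radius of at least one stop. The function name, the candidate-stop input, and the interpretation of the radius are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (illustrative assumptions): greedy set cover to pick UGV
# refueling stops that place every task point within UAV range of some stop.
import math

def greedy_refuel_stops(task_points, candidate_stops, uav_radius):
    """Return a subset of candidate_stops covering all task_points.

    task_points, candidate_stops: lists of (x, y) coordinates.
    uav_radius: assumed effective radius a UAV can reach from a stop and
                still return on one charge.
    """
    def covers(stop, point):
        return math.dist(stop, point) <= uav_radius

    uncovered = set(range(len(task_points)))
    chosen = []
    while uncovered:
        # Pick the candidate stop that covers the most remaining task points.
        best = max(candidate_stops,
                   key=lambda s: sum(covers(s, task_points[i]) for i in uncovered))
        newly = {i for i in uncovered if covers(best, task_points[i])}
        if not newly:
            raise ValueError("Some task points are unreachable from any candidate stop.")
        chosen.append(best)
        uncovered -= newly
    return chosen

# Example: 4 task points, 3 candidate stops, 5 km effective UAV radius.
tasks = [(0, 0), (1, 2), (6, 6), (7, 5)]
stops = [(0, 1), (6, 5), (3, 3)]
print(greedy_refuel_stops(tasks, stops, uav_radius=5.0))
```

The selected stops would then anchor the UGV route in the first step, after which the UAV routes are planned around them via task allocation and the E-VRPTW model in the second step.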