Abstract: Offline reinforcement learning (RL) is an attractive tool for unmanned aerial vehicle (UAV) systems, where online exploration is costly and raises safety concerns. In terrain-aware UAV relaying, agents may observe high-dimensional inputs such as terrain and land-cover maps, which describe the propagation environment but complicate offline learning from fixed datasets. This paper investigates the impact of compact state representations on offline RL for UAV relaying. End-to-end service is jointly constrained by UAV--user access links and a base-station-to-UAV backhaul link, yielding feasibility limits that are driven by user mobility and independent of UAV control. To distinguish these feasibility limits from control-induced sub-optimality, a candidate-set feasibility upper bound (CS-FUB) is introduced, which estimates the maximum achievable user coverage over a restricted set of UAV placements. To handle the high-dimensional terrain context, map-like observations are compressed into low-dimensional latent representations using a variational autoencoder (VAE), and policies are trained via Conservative Q-Learning (CQL). Simulation results show that training CQL directly on raw high-dimensional terrain-context states leads to slow convergence and large feasibility gaps. In contrast, VAE-encoded representations improve learning stability, enable earlier convergence to feasible relay configurations, and reduce sub-optimality relative to physical limits. Comparisons with autoencoder and linear compression baselines further demonstrate the benefit of structured representation learning for effective offline RL in terrain-aware UAV systems.