Abstract: In November 2025, the authors ran a workshop on what makes a good reinforcement learning (RL) environment for autonomous cyber defence (ACD). This paper details the knowledge shared by participants both during the workshop and shortly afterwards through their contributions herein. The workshop participants come from academia, industry, and government, and have extensive hands-on experience designing and working with RL and cyber environments. While there is now a sizeable body of literature describing work in RL for ACD, a great deal of tradecraft, domain knowledge, and common hazards is not detailed comprehensively in any single resource. With a specific focus on building better environments to train and evaluate autonomous RL agents in network defence scenarios, including government and critical infrastructure networks, the contributions of this work are twofold: (1) a framework for decomposing the interface between RL cyber environments and real systems, and (2) guidelines on current best practice for RL-based ACD environment development and agent evaluation, based on the key findings from our workshop.




Abstract: Power grid operation during an extreme event requires decision-making by human operators under stressful conditions with high cognitive load. Decision support under adverse dynamic events, especially if forecast, can be supplemented by intelligent proactive control. Power system operation during wildfires requires resiliency-driven proactive control for load shedding, line switching, and resource allocation that accounts for the dynamics of the wildfire and of failure propagation. However, the number of possible line- and load-switching actions in a large system during an event makes traditional prediction-driven and stochastic approaches computationally intractable, leading operators to often rely on greedy algorithms. We model and solve the proactive control problem as a Markov decision process and introduce an integrated testbed for spatio-temporal wildfire propagation and proactive power-system operation. We transform the enormous wildfire-propagation observation space and use it as part of a heuristic for proactive de-energization of transmission assets. We integrate this heuristic with a reinforcement-learning-based proactive policy for controlling the generating assets. Our approach allows this controller to provide setpoints for part of the generation fleet, while a myopic operator determines the setpoints for the remainder, resulting in a symbiotic action. We evaluate our approach on the IEEE 24-node system mapped onto a hypothetical terrain. Our results show that the proposed approach can help the operator reduce load loss during an extreme event, reduce power flow through lines that are to be de-energized, and reduce the likelihood of infeasible power-flow solutions, which would indicate violation of the short-term thermal limits of transmission lines.