LIACC, New York University Abu Dhabi
Abstract: Action selection based on flags is a Reinforcement Learning (RL) exploration policy that improves exploration of the state space by using flags to identify the most promising actions in each state. The quantum counterpart of this exploration policy improves on it further by exploiting a quadratic speedup when sampling flagged actions. This approach has already been applied successfully to the game of Checkers. In this work, we apply the method to Connect Four in order to study its performance in a different setting and thus better assess how well the technique generalizes. We also track a metric that was not considered in previous work: the average number of iterations needed to obtain a flagged action. Since going second is a significant disadvantage in Connect Four, we additionally examine how this more challenging scenario affects the performance of our approach. The experiments involved training and testing classical and quantum RL agents that played either first or second against a Randomized Negamax opponent. The results show that both flagged exploration policies clearly outperform a simple epsilon-greedy policy. Furthermore, the quantum agents did sample flagged actions in fewer iterations. Despite obtaining flagged actions more consistently, however, the quantum version achieved win rates identical to the classical one, which could be due to the simplicity of the chosen training scenario.
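To make the flag mechanism and the iteration metric concrete, the following is a minimal classical sketch in Python. The function name, the representation of flags as a set of actions, and the rejection-sampling loop are our own illustrative assumptions, not the paper's implementation; the quantum variant would replace the loop with amplitude amplification.

```python
import random

def sample_flagged(actions, flags, rng=random):
    """Rejection-sample actions until a flagged one is drawn.

    Returns the flagged action and the number of iterations used. With M
    flagged actions out of N, the expected iteration count is about N/M;
    the quantum counterpart uses amplitude amplification (Grover search)
    to reduce this to roughly sqrt(N/M) oracle calls.
    """
    if not flags:
        raise ValueError("at least one action must be flagged")
    iterations = 0
    while True:
        iterations += 1
        action = rng.choice(actions)
        if action in flags:
            return action, iterations

# Hypothetical example: 7 Connect Four columns, two flagged as promising.
action, iters = sample_flagged(list(range(7)), flags={2, 3})
print(action, iters)
```

Tracking the returned iteration count over many calls gives exactly the "average number of iterations to obtain a flagged action" metric described above.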
Abstract: The rapid adoption of Electric Vehicles (EVs) poses challenges for electricity grids to accommodate or mitigate peak demand. Vehicle-to-Vehicle Charging (V2VC) has recently been adopted by popular EVs, creating new opportunities and challenges for the management and operation of EVs. We present a novel V2VC model that allows decision-makers to take V2VC into account when optimizing their EV operations. We show that optimizing V2VC is NP-complete and find that even small problem instances are computationally challenging. We therefore propose R-V2VC, a heuristic that takes advantage of the resulting totally unimodular constraint matrix to efficiently solve problems of realistic size. Our results demonstrate that R-V2VC's solution time grows linearly with problem size while it achieves solutions of optimal or near-optimal quality. R-V2VC can be used for real-world operations and for studying what-if scenarios when evaluating the costs and benefits of V2VC.
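R-V2VC itself is not reproduced here. As a hedged illustration of why total unimodularity helps, the sketch below solves only the LP relaxation of a toy assignment-style matching, a stand-in for V2VC pairing decisions with made-up costs, and still recovers an integral solution, because the vertices of a polytope defined by a totally unimodular constraint matrix with integer right-hand sides are integral.

```python
import numpy as np
from scipy.optimize import linprog

# Toy, hypothetical data: cost[i][j] is the cost of vehicle i charging
# vehicle j. The assignment constraint matrix below is totally unimodular,
# so solving just the LP relaxation already yields a 0/1 optimal solution,
# with no branch-and-bound needed.
cost = np.array([[4.0, 2.0],
                 [1.0, 3.0]])
c = cost.flatten()  # variable order: x00, x01, x10, x11

# Each giver i serves exactly one receiver; each receiver j is served once.
A_eq = np.zeros((4, 4))
A_eq[0, 0:2] = 1        # giver 0: x00 + x01 = 1
A_eq[1, 2:4] = 1        # giver 1: x10 + x11 = 1
A_eq[2, [0, 2]] = 1     # receiver 0: x00 + x10 = 1
A_eq[3, [1, 3]] = 1     # receiver 1: x01 + x11 = 1
b_eq = np.ones(4)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4, method="highs")
print(res.x.reshape(2, 2))  # integral 0/1 matching from the LP alone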