Despite significant advances in reinforcement learning in recent years, many domains where rewards are sparse, e.g. given only upon task completion, remain quite challenging. In such cases, it can be beneficial to tackle the task from both its beginning and its end, and make the two ends meet. Existing approaches that do so, however, are not effective in the common scenario where the strategy needed near the end goal is very different from the one that is effective earlier on. In this work we propose a novel RL approach for such settings. In short, we first train a backward-looking agent with a simple relaxed goal, and then augment the state representation of the forward-looking agent with straightforward hint features. This allows the learned forward agent to leverage information from backward plans without mimicking their policy. We demonstrate the efficacy of our approach on the challenging game of Sokoban, where we substantially surpass learned solvers that generalize across levels and are competitive with the SOTA performance of the best highly crafted systems. Impressively, we achieve these results while learning from a small number of practice levels and using simple RL techniques.
In some puzzles, the strategy we need near the goal can be quite different from the strategy that is effective earlier on, e.g. due to a smaller branching factor near the exit state in a maze. A common approach in these cases is to apply both a forward and a backward search, and to try to align the two. In this work we take this idea a step further, within a reinforcement learning (RL) framework. Training a traditional forward-looking agent using RL can be difficult because rewards are often sparse, e.g. given only at the goal. Instead, we first train a backward-looking agent with a simple relaxed goal. We then augment the state representation of the puzzle with straightforward hint features extracted from the behavior of that agent. Finally, we train a forward-looking agent on this informed, augmented state. We demonstrate that this simple "access" to partial backward plans leads to a substantial performance boost. On the challenging domain of the Sokoban puzzle, our RL approach substantially surpasses the best learned solvers that generalize over levels, and is competitive with the SOTA performance of the best highly crafted solutions. Impressively, we achieve these results while learning from only a small number of practice levels and using simple RL techniques.
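To make the pipeline concrete, the following is a minimal sketch of the state-augmentation step. It is an illustration under assumed conventions, not the paper's actual feature design: it assumes a grid-world observation given as a 2-D array, and represents the hint features as a normalized per-cell visitation map aggregated from backward-agent rollouts, stacked onto the observation as an extra channel for the forward-looking agent.

```python
import numpy as np

def hint_features(backward_trajectories, grid_shape):
    """Aggregate backward-agent rollouts into a per-cell visitation map.

    backward_trajectories: list of trajectories, each a list of (row, col)
    cells visited by the backward-looking agent. (Hypothetical hint design
    for illustration; the actual features may differ.)
    """
    hints = np.zeros(grid_shape, dtype=np.float32)
    for traj in backward_trajectories:
        for r, c in traj:
            hints[r, c] += 1.0
    if hints.max() > 0:
        hints /= hints.max()  # normalize visitation counts to [0, 1]
    return hints

def augment_state(state, hints):
    """Stack the hint map as an extra observation channel."""
    return np.stack([state, hints], axis=0)

# Toy 4x4 level: one channel marking the forward agent's position.
state = np.zeros((4, 4), dtype=np.float32)
state[0, 0] = 1.0

# Two backward rollouts starting from the (relaxed) goal cell (3, 3).
trajs = [[(3, 3), (3, 2), (2, 2)], [(3, 3), (2, 3)]]
obs = augment_state(state, hint_features(trajs, (4, 4)))
print(obs.shape)  # (2, 4, 4): original state plus hint channel
```

The forward agent then trains on `obs` instead of `state`; since the hints are just extra input features, it can exploit the backward plans where they help and ignore them where the forward strategy must differ.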