Abstract: Advancing Multi-Agent Pathfinding (MAPF) and Multi-Robot Motion Planning (MRMP) requires platforms that enable transparent, reproducible comparisons across modeling choices. Existing tools either scale well under simplifying assumptions (grids, homogeneous agents) or offer higher fidelity with less comparable instrumentation. We present GRACE, a unified 2D simulator and benchmark that instantiates the same task at multiple abstraction levels (grid, roadmap, continuous) via explicit, reproducible operators and a common evaluation protocol. Our empirical results on public maps and representative planners enable commensurate comparisons on a shared instance set. Furthermore, we quantify the expected representation-fidelity trade-offs: MRMP solves instances at higher fidelity but lower speed, while grid and roadmap planners scale farther. By consolidating representation, execution, and evaluation, GRACE aims to make cross-representation studies more comparable and provides a means to advance multi-robot planning research and its translation to practice.
Abstract: In recent years, reinforcement learning (RL) has shown great potential for solving tasks in well-defined environments such as games or robotics. This paper addresses a robotic reaching task in a simulation run on the Neurorobotics Platform (NRP). The target position is initialized randomly, and the robot has 6 degrees of freedom. We compare the performance of various state-of-the-art model-free algorithms. First, the agent is trained on ground-truth data from the simulation to reach the target position in a single continuous movement. The complexity of the task is then increased by using image data from the simulation environment as input. Experimental results show that training efficiency and final performance can be improved with an appropriate dynamic training-schedule function for curriculum learning.