Abstract: We propose a contact-explicit hierarchical architecture coupling Reinforcement Learning (RL) and Model Predictive Control (MPC), in which a high-level RL agent provides gait and navigation commands to a low-level locomotion MPC. This offloads the combinatorial burden of contact-timing selection from the MPC by learning acyclic gaits through trial and error in simulation. We show that only a minimal set of rewards and limited tuning are required to obtain effective policies. We validate the architecture in simulation across robotic platforms spanning 50 kg to 120 kg and across different MPC implementations, observing the emergence of acyclic gaits and timing adaptations in flat-terrain legged and hybrid locomotion, and further demonstrating extensibility to non-flat terrains. Across all platforms, we achieve zero-shot sim-to-sim transfer without domain randomization, and we further demonstrate zero-shot sim-to-real transfer, again without domain randomization, on Centauro, our 120 kg wheeled-legged humanoid robot. We make our software framework and evaluation results publicly available at https://github.com/AndrePatri/AugMPC.

Abstract: In recent years, Sim2Real approaches have delivered strong results in robotics. Techniques such as model-based learning and domain randomization help bridge the gap between simulation and reality, but some applications still demand high simulation fidelity. Agricultural robotics is one such example: it requires detailed simulations in terms of both dynamics and visuals. However, current simulation software cannot yet provide this level of quality and accuracy. Existing Sim2Real techniques help mitigate the problem, but for these specific tasks they are not sufficient.