Mostafa Kotb

Model Predictive Control with Self-supervised Representation Learning

Apr 14, 2023
Jonas Matthies, Muhammad Burhan Hafez, Mostafa Kotb, Stefan Wermter

Over the last few years, we have not seen any major developments in model-free or model-based learning methods that would make one obsolete relative to the other. In most cases, the choice of technique depends heavily on the use case or other attributes, e.g. the environment. Both approaches have their own advantages, for example, sample efficiency or computational efficiency. However, by combining the two, the advantages of each can be retained and better performance achieved. The TD-MPC framework is an example of this approach. On the one hand, a world model in combination with model predictive control is used to get a good initial estimate of the value function. On the other hand, a Q-function is used to provide a good long-term estimate. Similar to algorithms like MuZero, a latent state representation is used in which only task-relevant information is encoded, reducing complexity. In this paper, we propose the use of a reconstruction function within the TD-MPC framework, so that the agent can reconstruct the original observation given the internal state representation. This gives our agent a more stable learning signal during training and also improves sample efficiency. Our proposed addition of another loss term leads to improved performance on both state- and image-based tasks from the DeepMind Control suite.
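As a rough illustration of where such a reconstruction term could sit in a TD-MPC-style model objective, the sketch below (PyTorch) decodes the observation from the latent state and adds the resulting error to the usual latent-consistency and reward losses. All module names, architectures, and coefficients here (Encoder/Decoder MLPs, recon_coef, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: adding an observation-reconstruction loss to a TD-MPC-style
# latent world model. Names and architectures are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ELU(),
                         nn.Linear(hidden, out_dim))

class WorldModel(nn.Module):
    def __init__(self, obs_dim, action_dim, latent_dim=50):
        super().__init__()
        self.encoder = mlp(obs_dim, latent_dim)                    # o_t -> z_t
        self.decoder = mlp(latent_dim, obs_dim)                    # z_t -> o_t (proposed addition)
        self.dynamics = mlp(latent_dim + action_dim, latent_dim)   # (z_t, a_t) -> z_{t+1}
        self.reward_head = mlp(latent_dim + action_dim, 1)         # (z_t, a_t) -> r_t

    def loss(self, obs, action, reward, next_obs, recon_coef=1.0):
        z = self.encoder(obs)
        za = torch.cat([z, action], dim=-1)
        z_next_pred = self.dynamics(za)
        with torch.no_grad():                                      # target latent, no gradient
            z_next_target = self.encoder(next_obs)
        consistency_loss = F.mse_loss(z_next_pred, z_next_target)
        reward_loss = F.mse_loss(self.reward_head(za).squeeze(-1), reward)
        recon_loss = F.mse_loss(self.decoder(z), obs)              # extra reconstruction term
        return consistency_loss + reward_loss + recon_coef * recon_loss
```

The reconstruction term forces the latent state to retain enough information to regenerate the observation, which is one way to obtain the more stable learning signal described in the abstract.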

Sample-efficient Real-time Planning with Curiosity Cross-Entropy Method and Contrastive Learning

Mar 07, 2023
Mostafa Kotb, Cornelius Weber, Stefan Wermter

Model-based reinforcement learning (MBRL) with real-time planning has shown great potential in locomotion and manipulation control tasks. However, the existing planning methods, such as the Cross-Entropy Method (CEM), do not scale well to complex high-dimensional environments. One of the key reasons for underperformance is the lack of exploration, as these planning methods only aim to maximize the cumulative extrinsic reward over the planning horizon. Furthermore, planning inside the compact latent space in the absence of observations makes it challenging to use curiosity-based intrinsic motivation. We propose Curiosity CEM (CCEM), an improved version of the CEM algorithm that encourages exploration via curiosity. Our proposed method maximizes the sum of state-action Q values over the planning horizon, in which these Q values estimate the future extrinsic and intrinsic reward, hence encouraging the agent to reach novel observations. In addition, our model uses contrastive representation learning to efficiently learn latent representations. Experiments on image-based continuous control tasks from the DeepMind Control suite show that CCEM is more sample-efficient than previous MBRL algorithms by a large margin and compares favorably with the best model-free RL methods.
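The sketch below (PyTorch) shows the general shape of a CEM planner whose objective is a sum of learned Q-values over the planning horizon, which is the kind of objective CCEM optimizes; in CCEM those Q-values additionally estimate curiosity-based intrinsic reward. The function names, signatures, and hyperparameters are assumptions for illustration, not the paper's code.

```python
# Minimal sketch of CEM planning in latent space with a Q-value objective.
# dynamics and q_fn are assumed callables; all names are illustrative.
import torch

def ccem_plan(z0, dynamics, q_fn, horizon=12, n_samples=500, n_elites=50,
              n_iters=6, action_dim=6):
    """Iteratively refit a Gaussian over action sequences to the elite samples.

    z0:       latent state, shape (latent_dim,)
    dynamics: callable (z, a) -> next latent state
    q_fn:     callable (z, a) -> Q-value; in CCEM this estimates future extrinsic
              plus intrinsic reward, so sequences reaching novel states score higher.
    """
    mean = torch.zeros(horizon, action_dim)
    std = torch.ones(horizon, action_dim)
    for _ in range(n_iters):
        # Sample candidate action sequences from the current Gaussian.
        actions = (mean + std * torch.randn(n_samples, horizon, action_dim)).clamp(-1, 1)
        z = z0.unsqueeze(0).expand(n_samples, -1)
        returns = torch.zeros(n_samples)
        for t in range(horizon):
            a = actions[:, t]
            returns = returns + q_fn(z, a).squeeze(-1)   # accumulate Q over the horizon
            z = dynamics(z, a)                           # roll latent dynamics forward
        # Refit the sampling distribution to the highest-scoring sequences.
        elites = actions[returns.topk(n_elites).indices]
        mean, std = elites.mean(dim=0), elites.std(dim=0) + 1e-6
    return mean[0]  # the first action of the refined plan is executed in the environment
```

Because the score is a sum of Q-values rather than of extrinsic rewards alone, exploration pressure enters the planner directly through the value estimates instead of requiring explicit intrinsic-reward rollouts.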

* 7 pages, 4 figures 