Muhammad Burhan Hafez

Continual Robot Learning using Self-Supervised Task Inference

Sep 10, 2023
Muhammad Burhan Hafez, Stefan Wermter

Endowing robots with the human ability to learn a growing set of skills over the course of a lifetime as opposed to mastering single tasks is an open problem in robot learning. While multi-task learning approaches have been proposed to address this problem, they pay little attention to task inference. In order to continually learn new tasks, the robot first needs to infer the task at hand without requiring predefined task representations. In this paper, we propose a self-supervised task inference approach. Our approach learns action and intention embeddings from self-organization of the observed movement and effect parts of unlabeled demonstrations and a higher-level behavior embedding from self-organization of the joint action-intention embeddings. We construct a behavior-matching self-supervised learning objective to train a novel Task Inference Network (TINet) to map an unlabeled demonstration to its nearest behavior embedding, which we use as the task representation. A multi-task policy is built on top of the TINet and trained with reinforcement learning to optimize performance over tasks. We evaluate our approach in the fixed-set and continual multi-task learning settings with a humanoid robot and compare it to different multi-task learning baselines. The results show that our approach outperforms the other baselines, with the difference being more pronounced in the challenging continual learning setting, and can infer tasks from incomplete demonstrations. Our approach is also shown to generalize to unseen tasks based on a single demonstration in one-shot task generalization experiments.
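
To make the task-inference step concrete, the following minimal Python sketch shows a nearest-behavior-embedding lookup of the kind described above; the array shapes, the mean-pooled demonstration encoder, and the Euclidean matching are illustrative assumptions, not the TINet architecture from the paper.

import numpy as np

# Hypothetical behavior prototypes learned by self-organizing the joint
# action-intention embeddings of unlabeled demonstrations (one row each).
behavior_prototypes = np.random.randn(16, 32)   # 16 behaviors, 32-dim embeddings

def embed_demonstration(demo_frames: np.ndarray) -> np.ndarray:
    """Stand-in for the demonstration encoder: here simply a mean over frame features."""
    return demo_frames.mean(axis=0)

def infer_task(demo_frames: np.ndarray) -> np.ndarray:
    """Map an unlabeled demonstration to its nearest behavior embedding,
    which then serves as the task representation for the multi-task policy."""
    z = embed_demonstration(demo_frames)
    dists = np.linalg.norm(behavior_prototypes - z, axis=1)
    return behavior_prototypes[np.argmin(dists)]

# The inferred task vector is combined with the environment state
# before being fed to the policy network.
demo = np.random.randn(50, 32)                  # 50 frames of 32-dim features
task_repr = infer_task(demo)
state = np.random.randn(10)
policy_input = np.concatenate([state, task_repr])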

* Accepted for publication in IEEE Transactions on Cognitive and Developmental Systems 

Map-based Experience Replay: A Memory-Efficient Solution to Catastrophic Forgetting in Reinforcement Learning

May 03, 2023
Muhammad Burhan Hafez, Tilman Immisch, Tom Weber, Stefan Wermter

Deep Reinforcement Learning agents often suffer from catastrophic forgetting, forgetting previously found solutions in parts of the input space when training on new data. Replay memories are a common solution to the problem, decorrelating and shuffling old and new training samples. They naively store state transitions as they come in, without regard for redundancy. We introduce a novel cognitively inspired replay memory approach based on the Grow-When-Required (GWR) self-organizing network, which resembles a map-based mental model of the world. Our approach organizes stored transitions into a concise environment-model-like network of state-nodes and transition-edges, merging similar samples to reduce the memory size and increase pairwise distance among samples, which increases the relevance of each sample. Overall, our paper shows that map-based experience replay allows for significant memory reduction with only small performance decreases.
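
The grow-or-merge rule at the heart of a GWR-style replay memory can be sketched as follows; the activity threshold, learning rate, and edge bookkeeping are illustrative assumptions, not the parameters used in the paper.

import numpy as np

class MapBasedReplay:
    """Illustrative sketch of a Grow-When-Required-style replay memory:
    a new transition either updates its best-matching node (merge) or
    creates a new node when no stored state is similar enough."""

    def __init__(self, activity_threshold: float = 0.9, lr: float = 0.1):
        self.nodes = []                     # state prototypes
        self.edges = {}                     # (i, j) -> transition data
        self.activity_threshold = activity_threshold
        self.lr = lr

    def _best_match(self, state: np.ndarray):
        if not self.nodes:
            return None, 0.0
        dists = [np.linalg.norm(state - n) for n in self.nodes]
        i = int(np.argmin(dists))
        return i, float(np.exp(-dists[i]))  # GWR-style activity in (0, 1]

    def _insert_or_merge(self, state: np.ndarray) -> int:
        i, activity = self._best_match(state)
        if i is None or activity < self.activity_threshold:
            self.nodes.append(state.copy())                   # grow: state is novel
            return len(self.nodes) - 1
        self.nodes[i] += self.lr * (state - self.nodes[i])    # merge: adapt prototype
        return i

    def add(self, state, action, reward, next_state):
        i = self._insert_or_merge(np.asarray(state, dtype=float))
        j = self._insert_or_merge(np.asarray(next_state, dtype=float))
        self.edges[(i, j)] = (action, reward)                 # transition stored as an edge

memory = MapBasedReplay()
memory.add([0.0, 0.0], action=1, reward=0.5, next_state=[0.1, 0.0])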

* Accepted for publication in Frontiers in Neurorobotics 

Model Predictive Control with Self-supervised Representation Learning

Apr 14, 2023
Jonas Matthies, Muhammad Burhan Hafez, Mostafa Kotb, Stefan Wermter

Over the last few years, neither model-free nor model-based learning methods have seen developments that would make one obsolete relative to the other. In most cases, the choice of technique depends heavily on the use case or other attributes, e.g. the environment. Both approaches have their own advantages, for example sample efficiency or computational efficiency. When the two are combined, however, their respective advantages can be brought together to achieve better performance. The TD-MPC framework is an example of this approach. On the one hand, a world model in combination with model predictive control is used to get a good initial estimate of the value function. On the other hand, a Q-function is used to provide a good long-term estimate. Similar to algorithms like MuZero, a latent state representation is used in which only task-relevant information is encoded, reducing complexity. In this paper, we propose the use of a reconstruction function within the TD-MPC framework, so that the agent can reconstruct the original observation from its internal state representation. This gives our agent a more stable learning signal during training and also improves sample efficiency. Our proposed addition of this loss term leads to improved performance on both state- and image-based tasks from the DeepMind Control Suite.
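
The division of labor between the world model and the Q-function can be illustrated with a rough planning sketch, assuming a toy linear latent model and placeholder reward and Q estimators in place of the learned networks; this is not the TD-MPC implementation itself.

import numpy as np

latent_dim, act_dim, horizon, n_candidates = 8, 2, 5, 64
A = np.random.randn(latent_dim, latent_dim) * 0.1        # stand-in latent dynamics
B = np.random.randn(latent_dim, act_dim) * 0.1

def dynamics(z, a):
    return z @ A.T + a @ B.T

def reward(z, a):
    return -np.sum(z**2, axis=-1)                         # placeholder reward model

def q_value(z):
    return -np.sum(z**2, axis=-1)                         # placeholder terminal Q estimate

def plan(z0):
    """Score sampled action sequences: short-horizon model rollouts plus Q at the end."""
    actions = np.random.uniform(-1, 1, (n_candidates, horizon, act_dim))
    z = np.repeat(z0[None], n_candidates, axis=0)
    returns = np.zeros(n_candidates)
    for t in range(horizon):
        returns += reward(z, actions[:, t])
        z = dynamics(z, actions[:, t])
    returns += q_value(z)                                 # long-term value beyond the horizon
    return actions[np.argmax(returns), 0]                 # execute only the first action

first_action = plan(np.random.randn(latent_dim))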

Chat with the Environment: Interactive Multimodal Perception using Large Language Models

Mar 14, 2023
Xufeng Zhao, Mengdi Li, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter

Programming robot behaviour in a complex world faces challenges on multiple levels, from dexterous low-level skills to high-level planning and reasoning. Recent pre-trained Large Language Models (LLMs) have shown remarkable reasoning ability in zero-shot robotic planning. However, it remains challenging to ground LLMs in multimodal sensory input and continuous action output, while enabling a robot to interact with its environment and acquire novel information as its policies unfold. We develop a robot interaction scenario with a partially observable state, which requires the robot to decide on a range of epistemic actions in order to sample sensory information among multiple modalities, before being able to execute the task correctly. An interactive perception framework is therefore proposed with an LLM as its backbone, whose ability is exploited to instruct epistemic actions and to reason over the resulting multimodal sensations (vision, sound, haptics, proprioception), as well as to plan an entire task execution based on the interactively acquired information. Our study demonstrates that LLMs can provide high-level planning and reasoning skills and control interactive robot behaviour in a multimodal environment, while multimodal modules with the context of the environmental state help ground the LLMs and extend their processing ability.
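
A hedged sketch of such an interactive perception loop is given below; query_llm, FakeRobot, and the action vocabulary are hypothetical stand-ins, not the framework's actual interface to an LLM or a real LLM API.

EPISTEMIC_ACTIONS = {"look", "knock", "touch", "weigh"}

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a pre-trained large language model."""
    # Ask for an information-gathering action first, then commit to a task command.
    return "pick up the glass bottle" if "Observation" in prompt else "knock"

class FakeRobot:
    def execute(self, action: str) -> str:
        """Execute an epistemic action and return a verbalized multimodal sensation."""
        return "a high-pitched ringing sound, suggesting glass"

def interactive_perception(instruction: str, robot) -> str:
    """Let the LLM choose epistemic actions until it can plan the task execution."""
    context = f"Task: {instruction}\n"
    while True:
        reply = query_llm(context + "Next action?")
        if reply in EPISTEMIC_ACTIONS:
            feedback = robot.execute(reply)                 # sample one sensory modality
            context += f"Action: {reply} -> Observation: {feedback}\n"
        else:
            return reply                                    # final, grounded task command

print(interactive_perception("pick up the fragile object", FakeRobot()))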

* See website at https://xf-zhao.github.io/projects/Matcha/ 

Learning Bidirectional Action-Language Translation with Limited Supervision and Incongruent Extra Input

Jan 09, 2023
Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Muhammad Burhan Hafez, Patrick Bruns, Stefan Wermter

Human infant learning happens during exploration of the environment, by interaction with objects, and by listening to and repeating utterances casually, which is analogous to unsupervised learning. Only occasionally does a learning infant receive a matching verbal description of an action it is committing, which is similar to supervised learning. Such a learning mechanism can be mimicked with deep learning. We model this weakly supervised learning paradigm using our Paired Gated Autoencoders (PGAE) model, which combines an action and a language autoencoder. After observing a performance drop when reducing the proportion of supervised training, we introduce the Paired Transformed Autoencoders (PTAE) model, using Transformer-based crossmodal attention. PTAE achieves significantly higher accuracy in language-to-action and action-to-language translations, particularly in realistic but difficult cases when only a few supervised training samples are available. We also test whether the trained model behaves realistically with conflicting multimodal input. In accordance with the concept of incongruence in psychology, conflict deteriorates the model output. Conflicting action input has a more severe impact than conflicting language input, and more conflicting features lead to larger interference. PTAE can be trained on mostly unlabeled data where labeled data is scarce, and it behaves plausibly when tested with incongruent input.
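
The crossmodal attention that fuses the two modalities can be sketched with a single attention layer, assuming illustrative dimensions; this is not the PTAE architecture itself.

import torch
import torch.nn as nn

d_model = 64
lang_tokens = torch.randn(1, 8, d_model)    # (batch, language steps, features)
act_tokens = torch.randn(1, 20, d_model)    # (batch, action steps, features)

crossmodal_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

# Language queries attend over action keys/values, fusing the two modalities;
# the fused representation can then be decoded into either modality.
fused, attn_weights = crossmodal_attn(query=lang_tokens, key=act_tokens, value=act_tokens)
print(fused.shape)                          # torch.Size([1, 8, 64])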

Impact Makes a Sound and Sound Makes an Impact: Sound Guides Representations and Explorations

Aug 04, 2022
Xufeng Zhao, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter

Sound is one of the most informative and abundant modalities in the real world, and it can be sensed robustly and without contact by small, cheap sensors that can be placed on mobile devices. Although deep learning is capable of extracting information from multiple sensory inputs, there has been little use of sound for the control and learning of robotic actions. For unsupervised reinforcement learning, an agent is expected to actively collect experiences and jointly learn representations and policies in a self-supervised way. We build realistic robotic manipulation scenarios with physics-based sound simulation and propose the Intrinsic Sound Curiosity Module (ISCM). The ISCM provides feedback to a reinforcement learner to learn robust representations and to reward more efficient exploration behavior. We perform experiments with sound enabled during pre-training and disabled during adaptation, and show that representations learned by the ISCM outperform those learned by vision-only baselines and that pre-trained policies can accelerate the learning process when applied to downstream tasks.
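
One plausible reading of the sound-based curiosity signal is sketched below: a predictor estimates audio features from visual features and its prediction error serves as an intrinsic reward. The exact inputs and targets of the ISCM, as well as the network sizes, are assumptions here.

import torch
import torch.nn as nn

vision_dim, audio_dim = 128, 32
sound_predictor = nn.Sequential(nn.Linear(vision_dim, 64), nn.ReLU(), nn.Linear(64, audio_dim))
optimizer = torch.optim.Adam(sound_predictor.parameters(), lr=1e-3)

def intrinsic_reward(vision_feat: torch.Tensor, audio_feat: torch.Tensor) -> float:
    """Prediction error on the observed impact sound; high error => novel interaction."""
    pred = sound_predictor(vision_feat)
    loss = nn.functional.mse_loss(pred, audio_feat)
    optimizer.zero_grad()
    loss.backward()          # the predictor improves, so the reward fades for familiar sounds
    optimizer.step()
    return loss.item()

r_int = intrinsic_reward(torch.randn(1, vision_dim), torch.randn(1, audio_dim))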

* Accepted at IROS 2022 

Behavior Self-Organization Supports Task Inference for Continual Robot Learning

Jul 09, 2021
Muhammad Burhan Hafez, Stefan Wermter

Recent advances in robot learning have enabled robots to become increasingly better at mastering a predefined set of tasks. On the other hand, as humans, we have the ability to learn a growing set of tasks over our lifetime. Continual robot learning is an emerging research direction with the goal of endowing robots with this ability. In order to learn new tasks over time, the robot first needs to infer the task at hand. Task inference, however, has received little attention in the multi-task learning literature. In this paper, we propose a novel approach to continual learning of robotic control tasks. Our approach performs unsupervised learning of behavior embeddings by incrementally self-organizing demonstrated behaviors. Task inference is performed by finding the nearest behavior embedding to a demonstrated behavior, which is used together with the environment state as input to a multi-task policy trained with reinforcement learning to optimize performance over tasks. Unlike previous approaches, our approach makes no assumptions about task distribution and requires no task exploration to infer tasks. We evaluate our approach in experiments with concurrently and sequentially presented tasks and show that it outperforms other multi-task learning approaches in terms of generalization performance and convergence speed, particularly in the continual learning setting.
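
The incremental self-organization of demonstrated behaviors can be sketched as a prototype list that grows whenever a demonstration is sufficiently novel; the novelty threshold and update rate are illustrative assumptions. This complements the nearest-embedding inference sketched earlier on this page.

import numpy as np

prototypes = []                                   # behavior embeddings learned so far

def organize(behavior_embedding, novelty_threshold=1.0, lr=0.2):
    """Adapt the nearest prototype, or add a new one if the behavior is novel."""
    if not prototypes:
        prototypes.append(behavior_embedding.copy())
        return 0
    dists = [np.linalg.norm(behavior_embedding - p) for p in prototypes]
    i = int(np.argmin(dists))
    if dists[i] > novelty_threshold:              # no known behavior is close enough
        prototypes.append(behavior_embedding.copy())
        return len(prototypes) - 1
    prototypes[i] += lr * (behavior_embedding - prototypes[i])
    return i

# Demonstrations of new tasks simply create new prototypes over time,
# with no task labels and no assumption about how many tasks exist.
for demo_embedding in np.random.randn(20, 16):
    behavior_id = organize(demo_embedding)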

* Accepted at IROS 2021 

Improving Model-Based Reinforcement Learning with Internal State Representations through Self-Supervision

Feb 10, 2021
Julien Scholz, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter

Using a model of the environment, reinforcement learning agents can plan their future moves and achieve superhuman performance in board games like Chess, Shogi, and Go, while remaining relatively sample-efficient. As demonstrated by the MuZero Algorithm, the environment model can even be learned dynamically, generalizing the agent to many more tasks while at the same time achieving state-of-the-art performance. Notably, MuZero uses internal state representations derived from real environment states for its predictions. In this paper, we bind the model's predicted internal state representation to the environment state via two additional terms: a reconstruction model loss and a simpler consistency loss, both of which work independently and unsupervised, acting as constraints to stabilize the learning process. Our experiments show that this new integration of a reconstruction model loss and a simpler consistency loss provides a significant performance increase in OpenAI Gym environments. Our modifications also enable self-supervised pretraining for MuZero, so the algorithm can learn about environment dynamics before a goal is made available.
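
The reward-free pretraining that the two added losses enable can be sketched as follows, assuming simple linear stand-ins for MuZero's representation, dynamics, and reconstruction networks; the sizes and optimizer settings are illustrative.

import torch
import torch.nn as nn

obs_dim, latent_dim, act_dim = 8, 4, 2
represent = nn.Linear(obs_dim, latent_dim)                  # h: observation -> latent state
dynamics = nn.Linear(latent_dim + act_dim, latent_dim)      # g: (latent, action) -> next latent
decode = nn.Linear(latent_dim, obs_dim)                     # added reconstruction head
params = list(represent.parameters()) + list(dynamics.parameters()) + list(decode.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

for _ in range(100):                                        # pretraining without any reward signal
    obs, act, next_obs = torch.randn(16, obs_dim), torch.randn(16, act_dim), torch.randn(16, obs_dim)
    z = represent(obs)
    z_next_pred = dynamics(torch.cat([z, act], dim=-1))
    recon_loss = nn.functional.mse_loss(decode(z), obs)                                   # reconstruction term
    consistency_loss = nn.functional.mse_loss(z_next_pred, represent(next_obs).detach())  # consistency term
    loss = recon_loss + consistency_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()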

Improving Robot Dual-System Motor Learning with Intrinsically Motivated Meta-Control and Latent-Space Experience Imagination

Apr 19, 2020
Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

Combining model-based and model-free learning systems has been shown to improve the sample efficiency of learning to perform complex robotic tasks. However, dual-system approaches fail to consider the reliability of the learned model when it is applied to make multiple-step predictions, resulting in a compounding of prediction errors and performance degradation. In this paper, we present a novel dual-system motor learning approach where a meta-controller arbitrates online between model-based and model-free decisions based on an estimate of the local reliability of the learned model. The reliability estimate is used in computing an intrinsic feedback signal, encouraging actions that lead to data that improves the model. Our approach also integrates arbitration with imagination where a learned latent-space model generates imagined experiences, based on its local reliability, to be used as additional training data. We evaluate our approach against baseline and state-of-the-art methods on learning vision-based robotic grasping in simulation and the real world. The results show that our approach outperforms the compared methods and learns near-optimal grasping policies in dense- and sparse-reward environments.
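
The arbitration rule can be sketched as a threshold on a local reliability estimate; the reliability definition, the threshold, the intrinsic-reward form, and the toy controllers below are assumptions, not the paper's formulation.

import numpy as np

prediction_log = [(np.zeros(3), 0.05), (np.ones(3) * 0.2, 0.10)]   # (state, model error) pairs

def local_reliability(state, radius=1.0):
    """Reliability from the model's recent prediction errors near this state."""
    errors = [e for s, e in prediction_log if np.linalg.norm(s - state) < radius]
    return 1.0 / (1.0 + np.mean(errors)) if errors else 0.0

def model_based_action(state):
    return -state                            # toy planner output

def model_free_action(state):
    return np.zeros_like(state)              # toy policy output

def meta_controller(state, threshold=0.7):
    """Arbitrate online between the two systems based on local model reliability."""
    rel = local_reliability(state)
    intrinsic_reward = 1.0 - rel             # encourage data that improves the model
    action = model_based_action(state) if rel >= threshold else model_free_action(state)
    return action, intrinsic_reward

action, r_int = meta_controller(np.zeros(3))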

Efficient Intrinsically Motivated Robotic Grasping with Learning-Adaptive Imagination in Latent Space

Oct 10, 2019
Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

Combining model-based and model-free deep reinforcement learning has shown great promise for improving sample efficiency on complex control tasks while still retaining high performance. Incorporating imagination is a recent effort in this direction inspired by human mental simulation of motor behavior. We propose a learning-adaptive imagination approach which, unlike previous approaches, takes into account the reliability of the learned dynamics model used for imagining the future. Our approach learns an ensemble of disjoint local dynamics models in latent space and derives an intrinsic reward based on learning progress, motivating the controller to take actions leading to data that improves the models. The learned models are used to generate imagined experiences, augmenting the training set of real experiences. We evaluate our approach on learning vision-based robotic grasping and show that it significantly improves sample efficiency and achieves near-optimal performance in a sparse reward environment.
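
A sketch of reliability-gated, learning-progress-driven imagination is given below; the error windows, thresholds, and the toy linear local model are illustrative choices rather than the paper's settings.

import numpy as np

class LocalModel:
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))           # toy linear dynamics for one latent region
        self.errors = []                        # recent one-step prediction errors

    def predict(self, z):
        return z @ self.W.T

    def learning_progress(self, window=10):
        """Decrease in mean prediction error between the two most recent windows."""
        if len(self.errors) < 2 * window:
            return 0.0
        old = np.mean(self.errors[-2 * window:-window])
        new = np.mean(self.errors[-window:])
        return max(0.0, old - new)

def imagine(model, z, steps=3, error_threshold=0.1):
    """Roll out imagined latent states only while the local model is reliable."""
    rollout = []
    if model.errors and np.mean(model.errors[-5:]) > error_threshold:
        return rollout                          # skip imagination for unreliable models
    for _ in range(steps):
        z = model.predict(z)
        rollout.append(z)                       # imagined experience to augment real data
    return rollout

m = LocalModel(dim=4)
m.errors = list(np.linspace(0.5, 0.05, 30))     # shrinking error => positive learning progress
r_int = m.learning_progress()                    # intrinsic reward from learning progress
imagined = imagine(m, np.random.randn(4))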

* In: Proceedings of the Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Oslo, Norway, Aug. 19-22, 2019 