Oleg Arenz

LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning

Mar 01, 2023
Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, Jan Peters

Recent methods for imitation learning directly learn a $Q$-function using an implicit reward formulation rather than an explicit reward function. However, these methods generally require implicit reward regularization to improve stability and often mistreat absorbing states. Previous works show that a squared norm regularization on the implicit reward function is effective, but do not provide a theoretical analysis of the resulting properties of the algorithms. In this work, we show that using this regularizer under a mixture distribution of the policy and the expert provides a particularly illuminating perspective: the original objective can be understood as squared Bellman error minimization, and the corresponding optimization problem minimizes a bounded $\chi^2$-Divergence between the expert and the mixture distribution. This perspective allows us to address instabilities and properly treat absorbing states. We show that our method, Least Squares Inverse Q-Learning (LS-IQ), outperforms state-of-the-art algorithms, particularly in environments with absorbing states. Finally, we propose to use an inverse dynamics model to learn from observations only. Using this approach, we retain performance in settings where no expert actions are available.
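
To make the implicit-reward formulation concrete: the reward recovered from the $Q$-function via the inverse Bellman operator is $r(s,a) = Q(s,a) - \gamma \mathbb{E}[V(s')]$, and the squared-norm regularizer discussed above penalizes this quantity under an equal-weight mixture of expert and policy samples. The sketch below illustrates only that regularizer; the callables `q_net` and `v_net` and the batch layout are assumptions, and this is not the full LS-IQ objective.

```python
def implicit_reward(q_net, v_net, states, actions, next_states, gamma=0.99):
    """Inverse Bellman operator: r(s, a) = Q(s, a) - gamma * V(s')."""
    return q_net(states, actions) - gamma * v_net(next_states)

def squared_reward_regularizer(q_net, v_net, expert_batch, policy_batch, gamma=0.99):
    """Squared-norm penalty on the implicit reward under a 50/50 mixture of
    expert and policy transitions (each batch: states, actions, next_states)."""
    r_expert = implicit_reward(q_net, v_net, *expert_batch, gamma=gamma)
    r_policy = implicit_reward(q_net, v_net, *policy_batch, gamma=gamma)
    return 0.5 * (r_expert ** 2).mean() + 0.5 * (r_policy ** 2).mean()
```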

A Unified Perspective on Natural Gradient Variational Inference with Gaussian Mixture Models

Sep 23, 2022
Oleg Arenz, Philipp Dahlinger, Zihan Ye, Michael Volpp, Gerhard Neumann

Variational inference with Gaussian mixture models (GMMs) enables learning of highly tractable yet multi-modal approximations of intractable target distributions. GMMs are particularly relevant for problem settings with up to a few hundred dimensions, for example in robotics, for modelling distributions over trajectories or joint distributions. This work focuses on two very effective methods for GMM-based variational inference that both employ independent natural gradient updates for the individual components and for the categorical distribution of the weights. We show, for the first time, that their derived updates are equivalent, although their practical implementations and theoretical guarantees differ. We identify several design choices that distinguish the two approaches, namely with respect to sample selection, natural gradient estimation, stepsize adaptation, and whether trust regions are enforced or the number of components is adapted. We perform extensive ablations on these design choices and show that they strongly affect the efficiency of the optimization and the variability of the learned distribution. Based on our insights, we propose a novel instantiation of our generalized framework that combines first-order natural gradient estimates with trust regions and component adaptation, and that significantly outperforms both previous methods in all our experiments.
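
A design choice common to both methods is that the categorical weights receive their own natural gradient update, independent of the component updates. Purely as a schematic illustration (not the exact update derived in the paper), such an update can be written in multiplicative softmax form over per-component rewards; the function name `weight_update`, the `stepsize` parameter and the abstract reward `R(o)` are assumptions.

```python
import numpy as np

def weight_update(log_weights, component_rewards, stepsize=1.0):
    """Schematic multiplicative update of the mixture weights:
    new_w(o) ∝ old_w(o) * exp(stepsize * R(o)), where R(o) is a
    per-component reward (e.g. an estimate of the expected log target
    density minus the log model density under component o)."""
    logits = np.asarray(log_weights) + stepsize * np.asarray(component_rewards)
    logits -= logits.max()              # for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()
```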

Self-supervised Sequential Information Bottleneck for Robust Exploration in Deep Reinforcement Learning

Sep 12, 2022
Bang You, Jingming Xie, Youping Chen, Jan Peters, Oleg Arenz

Effective exploration is critical for reinforcement learning agents in environments with sparse rewards or high-dimensional state-action spaces. Recent works based on state-visitation counts, curiosity and entropy maximization generate intrinsic reward signals to motivate the agent to visit novel states for exploration. However, the agent can get distracted by perturbations to sensor inputs that contain novel but task-irrelevant information, e.g., due to sensor noise or a changing background. In this work, we introduce the sequential information bottleneck objective for learning compressed and temporally coherent representations by modelling and compressing sequential predictive information in time-series observations. For efficient exploration in noisy environments, we further construct intrinsic rewards that capture task-relevant state novelty based on the learned representations. We derive a variational upper bound on our sequential information bottleneck objective for practical optimization and provide an information-theoretic interpretation of the derived upper bound. Our experiments on a set of challenging image-based simulated control tasks show that our method achieves better sample efficiency and robustness to both white noise and natural video backgrounds than state-of-the-art methods based on curiosity, entropy maximization and information gain.
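
As a rough illustration of how a learned compressed representation can be turned into an intrinsic reward, the snippet below computes a generic curiosity-style bonus in latent space; it is not necessarily the exact bonus constructed in the paper, and `encoder` and `latent_model` are assumed, user-provided networks.

```python
import torch

def latent_novelty_bonus(encoder, latent_model, obs, action, next_obs):
    """Intrinsic reward as the prediction error of a latent forward model.
    Because it is computed in the compressed representation space,
    task-irrelevant pixel noise should contribute little to the bonus."""
    with torch.no_grad():
        z, z_next = encoder(obs), encoder(next_obs)
        z_pred = latent_model(z, action)
        return ((z_pred - z_next) ** 2).mean(dim=-1)  # one scalar per transition
```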

* 14 pages 

Integrating Contrastive Learning with Dynamic Models for Reinforcement Learning from Images

Mar 02, 2022
Bang You, Oleg Arenz, Youping Chen, Jan Peters

Recent methods for reinforcement learning from images use auxiliary tasks to learn image features that are used by the agent's policy or Q-function. In particular, methods based on contrastive learning that induce linearity of the latent dynamics or invariance to data augmentation have been shown to greatly improve the sample efficiency of the reinforcement learning algorithm and the generalizability of the learned embedding. We further argue that explicitly improving the Markovianity of the learned embedding is desirable and propose a self-supervised representation learning method which integrates contrastive learning with dynamic models to synergistically combine these three objectives: (1) we maximize the InfoNCE bound on the mutual information between the state- and action-embedding and the embedding of the next state to induce a linearly predictive embedding without explicitly learning a linear transition model, (2) we further improve the Markovianity of the learned embedding by explicitly learning a non-linear transition model using regression, and (3) we maximize the mutual information between the two nonlinear predictions of the next embeddings based on the current action and two independent augmentations of the current state, which naturally induces transformation invariance not only for the state embedding, but also for the nonlinear transition model. Experimental evaluation on the DeepMind Control Suite shows that our proposed method achieves higher sample efficiency and better generalization than state-of-the-art methods based on contrastive learning or reconstruction.
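
Objective (1) is an InfoNCE bound; a generic in-batch version of such a loss, where the $i$-th predicted next-state embedding must be matched against the $i$-th true next-state embedding, looks roughly as follows. The cosine similarity and the `temperature` value are illustrative choices, not necessarily those used in the paper.

```python
import torch
import torch.nn.functional as F

def infonce_loss(pred_next_z, next_z, temperature=0.1):
    """In-batch InfoNCE: the matching pair (i, i) is the positive and all
    other pairs in the batch act as negatives."""
    pred = F.normalize(pred_next_z, dim=-1)
    target = F.normalize(next_z, dim=-1)
    logits = pred @ target.t() / temperature                  # (B, B) similarities
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```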

* Neurocomputing 476(2022)102-114  
* 28 pages, 11 figures, 5 tables 

Assisted Teleoperation in Changing Environments with a Mixture of Virtual Guides

Aug 12, 2020
Marco Ewerton, Oleg Arenz, Jan Peters

Haptic guidance is a powerful technique to combine the strengths of humans and autonomous systems for teleoperation. The autonomous system can provide haptic cues to enable the operator to perform precise movements; the operator can interfere with the plan of the autonomous system, leveraging his/her superior cognitive capabilities. However, providing haptic cues such that the individual strengths are not impaired is challenging, because low forces provide little guidance, whereas strong forces can hinder the operator in realizing his/her plan. Based on variational inference, we learn a Gaussian mixture model (GMM) over trajectories to accomplish a given task. The learned GMM is used to construct a potential field which determines the haptic cues. The potential field smoothly changes during teleoperation based on our updated belief over the plans and their respective phases. Furthermore, new plans are learned online when the operator does not follow any of the proposed plans, or after changes in the environment. User studies confirm that our framework helps users perform teleoperation tasks more accurately than without haptic cues and, in some cases, faster. Moreover, we demonstrate the use of our framework to help a subject teleoperate a 7-DoF manipulator in a pick-and-place task.
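
One simple way to derive haptic cues from a trajectory GMM, shown here only as an illustrative sketch, is to pull the operator's tool along the gradient of the GMM log-density; the function name `guidance_force` and the `gain` factor are assumptions, and the exact shaping of the potential field in the paper may differ.

```python
import numpy as np
from scipy.stats import multivariate_normal

def guidance_force(x, weights, means, covs, gain=1.0):
    """Force along grad log p(x) for a GMM p(x) = sum_k w_k N(x | mu_k, Sigma_k):
    grad log p(x) = sum_k r_k(x) * Sigma_k^{-1} (mu_k - x), with
    responsibilities r_k(x) = w_k N_k(x) / p(x)."""
    densities = np.array([w * multivariate_normal.pdf(x, mean=m, cov=c)
                          for w, m, c in zip(weights, means, covs)])
    responsibilities = densities / densities.sum()
    grad_log_p = sum(r * np.linalg.solve(c, m - x)
                     for r, m, c in zip(responsibilities, means, covs))
    return gain * grad_log_p
```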

* Advanced Robotics, 2020  
* 19 pages, 9 figures 

Non-Adversarial Imitation Learning and its Connections to Adversarial Methods

Aug 08, 2020
Oleg Arenz, Gerhard Neumann

Many modern methods for imitation learning and inverse reinforcement learning, such as GAIL or AIRL, are based on an adversarial formulation. These methods apply GANs to match the expert's distribution over states and actions with the implicit state-action distribution induced by the agent's policy. However, by framing imitation learning as a saddle point problem, adversarial methods can suffer from unstable optimization, and convergence can only be shown for small policy updates. We address these problems by proposing a framework for non-adversarial imitation learning. The resulting algorithms are similar to their adversarial counterparts and, thus, provide insights for adversarial imitation learning methods. Most notably, we show that AIRL is an instance of our non-adversarial formulation, which enables us to greatly simplify its derivations and obtain stronger convergence guarantees. We also show that our non-adversarial formulation can be used to derive novel algorithms by presenting a method for offline imitation learning that is inspired by the recent ValueDice algorithm, but does not rely on small policy updates for convergence. In our simulated robot experiments, our offline method for non-adversarial imitation learning seems to perform best when using many updates for policy and discriminator at each iteration and outperforms behavioral cloning and ValueDice.

Deep Adversarial Reinforcement Learning for Object Disentangling

Mar 08, 2020
Melvin Laux, Oleg Arenz, Jan Peters, Joni Pajarinen

Deep learning in combination with improved training techniques and high computational power has led to recent advances in the field of reinforcement learning (RL) and to successful robotic RL applications such as in-hand manipulation. However, most robotic RL relies on a well-known initial state distribution. In real-world tasks, this information is often not available. For example, when disentangling waste objects, the actual position of the robot w.r.t. the objects may not match the positions the RL policy was trained for. To solve this problem, we present a novel adversarial reinforcement learning (ARL) framework. The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states. We train the protagonist and the adversary jointly to allow them to adapt to the changing policy of their opponent. We show that our method can generalize from training to test scenarios by training an end-to-end system for robot control to solve a challenging object disentangling task. Experiments with a KUKA LBR+ 7-DOF robot arm show that our approach outperforms the baseline method in disentangling when starting from different initial states than provided during training.
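
The alternating rollout scheme can be sketched roughly as follows. This is a schematic outline only: the gym-style `env` interface, the `act`/`observe` agent methods, the step budgets, and the zero-sum reward assignment for the adversary are assumptions rather than details taken from the paper.

```python
def arl_rollout(env, protagonist, adversary, adversary_steps=20, protagonist_steps=200):
    """One schematic ARL rollout: the adversary first steers the system into a
    challenging state, then the protagonist takes over and learns to solve the
    task from there; both agents are trained jointly over many such rollouts."""
    state = env.reset()
    for _ in range(adversary_steps):                 # adversary perturbs the start
        state, _, done, _ = env.step(adversary.act(state))
        if done:
            state = env.reset()
    for _ in range(protagonist_steps):               # protagonist solves the task
        action = protagonist.act(state)
        next_state, reward, done, _ = env.step(action)
        protagonist.observe(state, action, reward, next_state, done)
        adversary.observe(state, action, -reward, next_state, done)  # assumed zero-sum signal
        state = env.reset() if done else next_state
```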

* 8 pages, submitted to IROS 2020 

Probabilistic approach to physical object disentangling

Feb 26, 2020
Joni Pajarinen, Oleg Arenz, Jan Peters, Gerhard Neumann

Physically disentangling entangled objects from each other is a problem encountered in waste segregation or in any task that requires disassembly of structures. Often there are no object models, and, especially with cluttered, irregularly shaped objects, the robot cannot create a model of the scene due to occlusion. One of our key insights is that, based on previous sensory input, we are only interested in moving an object out of the entanglement around obstacles. That is, we only need to know where the robot can successfully move in order to plan the disentangling. Due to this uncertainty, we integrate information about blocked movements into a probability map. The map defines the probability of the robot successfully moving to a specific configuration. Using the failure probability of a sequence of movements as cost, we can then plan and execute disentangling iteratively. Since our approach circumvents only obstacles that it already knows about, new movements will yield information about unknown obstacles that block movement, until the robot has learned to circumvent all obstacles and disentangling succeeds. In the experiments, we use a special probabilistic version of the Rapidly-exploring Random Tree (RRT) algorithm for planning and demonstrate successful disentanglement of objects both in 2-D and 3-D simulation and on a KUKA LBR 7-DOF robot. Moreover, our approach outperforms baseline methods.
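
The planning cost follows directly from the probability map. As a minimal sketch, assuming (for illustration only) that per-movement failures are independent, the failure probability of a candidate sequence can be computed as below; the names `sequence_failure_cost` and `success_probs` are hypothetical.

```python
import numpy as np

def sequence_failure_cost(success_probs):
    """Cost of a movement sequence: probability that at least one movement
    fails, given per-movement success probabilities looked up from the
    probability map (independence of failures assumed for illustration)."""
    return 1.0 - np.prod(np.asarray(success_probs, dtype=float))
```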

Expected Information Maximization: Using the I-Projection for Mixture Density Estimation

Jan 23, 2020
Philipp Becker, Oleg Arenz, Gerhard Neumann

Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution onto the model distribution. The M-projection forces the model to average over modes it cannot represent. In contrast, the I(nformation)-projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where modelling single modes correctly is more important than covering all the modes. Despite this advantage, the I-projection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixture models and Gaussian mixtures of experts. Our approach applies a variational upper bound to the I-projection objective, which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing for efficient optimization. Similar to GANs, our approach employs discriminators, but uses a more stable optimization procedure based on a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets.
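
For reference, the two projections differ only in the direction of the KL divergence between the data distribution $p$ and the model $q_\theta$:

$$\theta_{\mathrm{M}} = \arg\min_{\theta} \mathrm{KL}\big(p \,\|\, q_\theta\big) = \arg\min_{\theta} \mathbb{E}_{p(x)}\big[\log p(x) - \log q_\theta(x)\big] \quad \text{(maximum likelihood, mode-averaging)},$$

$$\theta_{\mathrm{I}} = \arg\min_{\theta} \mathrm{KL}\big(q_\theta \,\|\, p\big) = \arg\min_{\theta} \mathbb{E}_{q_\theta(x)}\big[\log q_\theta(x) - \log p(x)\big] \quad \text{(mode-seeking)}.$$

Since the expectation in the I-projection is taken under the model, it can be estimated from model samples once the log density ratio $\log q_\theta(x) - \log p(x)$ is approximated, e.g. with a discriminator as mentioned above.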

Trust-Region Variational Inference with Gaussian Mixture Models

Jul 10, 2019
Oleg Arenz, Mingjun Zhong, Gerhard Neumann

Many methods for machine learning rely on approximate inference from intractable probability distributions. Variational inference approximates such distributions by tractable models that can be subsequently used for approximate inference. Learning sufficiently accurate approximations requires a rich model family and careful exploration of the relevant modes of the target distribution. We propose a method for learning accurate GMM approximations of intractable probability distributions based on insights from policy search by establishing information-geometric trust regions for principled exploration. For efficient improvement of the GMM approximation, we derive a lower bound on the corresponding optimization objective enabling us to update the components independently. The use of the lower bound ensures convergence to a local optimum of the original objective. The number of components is adapted online by adding new components in promising regions and by deleting components with negligible weight. We demonstrate on several domains that we can learn approximations of complex, multi-modal distributions with a quality that is unmet by previous variational inference methods, and that the GMM approximation can be used for drawing samples that are on par with samples created by state-of-the-art MCMC samplers while requiring up to three orders of magnitude less computational resources.
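
For context, the underlying variational objective is the standard one for an unnormalized target $\tilde{p}$, here with a GMM approximation:

$$\max_{q}\; \mathbb{E}_{q(x)}\big[\log \tilde{p}(x)\big] + \mathrm{H}(q), \qquad q(x) = \sum_{o} q(o)\, \mathcal{N}\big(x \mid \mu_o, \Sigma_o\big),$$

which is equivalent to minimizing $\mathrm{KL}(q \,\|\, p)$ up to the unknown normalization constant of $\tilde{p}$. The lower bound mentioned above decomposes this objective so that each component $\mathcal{N}(x \mid \mu_o, \Sigma_o)$ and the weight distribution $q(o)$ can be updated independently within their trust regions.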
