
Ashvin Nair

Overcoming Exploration in Reinforcement Learning with Demonstrations

Feb 25, 2018
Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel

(Figures 1–4 of the paper omitted)

Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward becomes exponentially more difficult as the task horizon or action dimensionality grows. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control, such as stacking blocks with a robot arm. Our method, which builds on Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order-of-magnitude speedup over RL on simulated robotics tasks. It is simple to implement and adds only the assumption that a small set of demonstrations can be collected. Furthermore, our method solves tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.
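In this paper's approach, demonstrations are kept in a second replay buffer and a behavior-cloning term is added to the DDPG actor update, gated by a Q-filter so that cloning applies only where the critic rates the demonstrator's action above the policy's. The following is a minimal, hypothetical sketch of that combined actor loss on a batch of demonstration samples; the array shapes and names are illustrative, not the paper's implementation:

```python
import numpy as np

def actor_loss_with_demos(q_pi, q_demo, pi_actions, demo_actions, bc_weight=1.0):
    """Sketch of a DDPG actor loss augmented with a behavior-cloning term.

    q_pi:         critic values Q(s, pi(s)) on demonstration states, shape (N,)
    q_demo:       critic values Q(s, a_demo) for demonstrator actions, shape (N,)
    pi_actions:   policy actions pi(s), shape (N, action_dim)
    demo_actions: demonstrator actions a_demo, shape (N, action_dim)
    """
    # Standard DDPG objective: maximize the critic's value of policy actions.
    rl_loss = -np.mean(q_pi)
    # Q-filter: clone only where the critic prefers the demonstrator's action.
    mask = (q_demo > q_pi).astype(float)
    bc_loss = np.mean(mask[:, None] * (pi_actions - demo_actions) ** 2)
    return rl_loss + bc_weight * bc_loss
```

In a real implementation both terms would be differentiated through the actor network; the Q-filter mask prevents the cloning term from dragging the policy back toward suboptimal demonstrations once the policy surpasses them.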

* 8 pages, ICRA 2018 

Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation

Mar 06, 2017
Ashvin Nair, Dian Chen, Pulkit Agrawal, Phillip Isola, Pieter Abbeel, Jitendra Malik, Sergey Levine

(Figures 1–4 of the paper omitted)

Manipulation of deformable objects, such as ropes and cloth, is an important but challenging problem in robotics. We present a learning-based system in which a robot takes as input a sequence of images of a human manipulating a rope from an initial to a goal configuration, and outputs a sequence of actions that reproduce the human demonstration using only monocular images. To perform this task, the robot learns a pixel-level inverse dynamics model of rope manipulation directly from images in a self-supervised manner, using about 60K interactions with the rope collected autonomously by the robot. The human demonstration provides a high-level plan of what to do, and the low-level inverse model is used to execute the plan. We show that by combining the high- and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction.
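The division of labor above — demo images as a high-level plan, an inverse model to execute each step — can be illustrated with a toy stand-in for the learned model. The nearest-neighbor lookup below is purely illustrative (the paper learns a convolutional inverse model from raw pixels), and all names are hypothetical:

```python
import numpy as np

def nearest_neighbor_inverse_model(dataset, obs, next_obs):
    """Toy stand-in for a learned inverse dynamics model: return the action
    whose recorded (obs, next_obs) transition is closest to the query pair.

    dataset: list of (obs, action, next_obs) triples collected autonomously.
    """
    dists = [np.linalg.norm(o - obs) + np.linalg.norm(n - next_obs)
             for o, _, n in dataset]
    return dataset[int(np.argmin(dists))][1]

def follow_demonstration(dataset, demo_images):
    """Step through consecutive pairs of human demo images, asking the
    inverse model for an action that reproduces each transition."""
    return [nearest_neighbor_inverse_model(dataset, demo_images[t], demo_images[t + 1])
            for t in range(len(demo_images) - 1)]
```

The key structural point survives the simplification: the demonstration supplies only a sequence of desired observations, and every action is produced by a model trained from the robot's own self-supervised interaction data.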

* 8 pages, accepted to International Conference on Robotics and Automation (ICRA) 2017 

Learning to Poke by Poking: Experiential Learning of Intuitive Physics

Feb 15, 2017
Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, Jitendra Malik, Sergey Levine

(Figures 1–4 of the paper omitted)

We investigate an experiential learning paradigm for acquiring an internal model of intuitive physics. Our model is evaluated on a real-world robotic manipulation task that requires displacing objects to target locations by poking. The robot gathered over 400 hours of experience by executing more than 100K pokes on different objects. We propose a novel approach based on deep neural networks for modeling the dynamics of the robot's interactions directly from images, by jointly estimating forward and inverse models of dynamics. The inverse model objective provides supervision to construct informative visual features, which the forward model can then predict and which in turn regularize the feature space for the inverse model. The interplay between these two objectives creates useful, accurate models that can then be used for multi-step decision making. This formulation has the additional benefit that forward models can be learned in an abstract feature space, alleviating the need to predict pixels. Our experiments show that this joint modeling approach outperforms alternative methods.
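The joint objective can be sketched as two coupled losses computed in feature space: the inverse model predicts the action from consecutive feature vectors, while the forward model predicts the next features from current features and action, so no pixels are ever predicted. The linear models and names below are hypothetical placeholders for the paper's deep networks:

```python
import numpy as np

def joint_dynamics_loss(phi_t, phi_t1, actions, W_fwd, W_inv, fwd_weight=0.1):
    """Sketch of a joint forward/inverse dynamics objective in feature space.

    phi_t, phi_t1: feature vectors of consecutive images, shape (N, d)
    actions:       executed actions, shape (N, a)
    W_inv:         linear inverse model, shape (2*d, a)
    W_fwd:         linear forward model, shape (d + a, d)
    """
    # Inverse model: predict the action from (phi_t, phi_t1).
    inv_pred = np.concatenate([phi_t, phi_t1], axis=1) @ W_inv
    inv_loss = np.mean((inv_pred - actions) ** 2)
    # Forward model: predict next features from (phi_t, action) -- the loss
    # lives in feature space, which regularizes the features the inverse
    # model learns rather than forcing pixel prediction.
    fwd_pred = np.concatenate([phi_t, actions], axis=1) @ W_fwd
    fwd_loss = np.mean((fwd_pred - phi_t1) ** 2)
    return inv_loss + fwd_weight * fwd_loss
```

In the paper the feature map and both models are deep networks trained end to end; the sketch only shows how the two losses share one feature space.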

* NIPS 2016  