Sergey Levine

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

Jul 26, 2018
Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine

Visual Reinforcement Learning with Imagined Goals

Jul 12, 2018
Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine

Automatically Composing Representation Transformations as a Means for Generalization

Jul 12, 2018
Michael B. Chang, Abhishek Gupta, Sergey Levine, Thomas L. Griffiths

QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation

Jul 02, 2018
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations

Jun 26, 2018
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, Sergey Levine

Learning Instance Segmentation by Interaction

Jun 21, 2018
Deepak Pathak, Yide Shentu, Dian Chen, Pulkit Agrawal, Trevor Darrell, Sergey Levine, Jitendra Malik

Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation

Jun 18, 2018
YuXuan Liu, Abhishek Gupta, Pieter Abbeel, Sergey Levine

Unsupervised Meta-Learning for Reinforcement Learning

Jun 12, 2018
Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn, Sergey Levine

Probabilistic Model-Agnostic Meta-Learning

Jun 07, 2018
Chelsea Finn, Kelvin Xu, Sergey Levine

Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings

Jun 07, 2018
John D. Co-Reyes, YuXuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine
