Sergey Levine

Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty
Dec 27, 2018
Rowan McAllister, Gregory Kahn, Jeff Clune, Sergey Levine

Residual Reinforcement Learning for Robot Control
Dec 18, 2018
Tobias Johannink, Shikhar Bahl, Ashvin Nair, Jianlan Luo, Avinash Kumar, Matthias Loskyll, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine

Visual Reinforcement Learning with Imagined Goals
Dec 04, 2018
Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine

Visual Memory for Robust Path Following
Dec 03, 2018
Ashish Kumar, Saurabh Gupta, David Fouhey, Sergey Levine, Jitendra Malik

Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control
Dec 03, 2018
Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, Sergey Levine

QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
Nov 28, 2018
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine

Guiding Policies with Language via Meta-Learning
Nov 19, 2018
John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, John DeNero, Pieter Abbeel, Sergey Levine

Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
Nov 19, 2018
Eric Jang, Coline Devin, Vincent Vanhoucke, Sergey Levine

Learning Actionable Representations with Goal-Conditioned Policies
Nov 19, 2018
Dibya Ghosh, Abhishek Gupta, Sergey Levine

Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
Nov 02, 2018
Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine