Sergey Levine

Reasoning About Physical Interactions with Object-Oriented Prediction and Planning

Jan 07, 2019
Michael Janner, Sergey Levine, William T. Freeman, Joshua B. Tenenbaum, Chelsea Finn, Jiajun Wu


Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty

Dec 27, 2018
Rowan McAllister, Gregory Kahn, Jeff Clune, Sergey Levine


Learning to Walk via Deep Reinforcement Learning

Dec 26, 2018
Tuomas Haarnoja, Aurick Zhou, Sehoon Ha, Jie Tan, George Tucker, Sergey Levine


Residual Reinforcement Learning for Robot Control

Dec 18, 2018
Tobias Johannink, Shikhar Bahl, Ashvin Nair, Jianlan Luo, Avinash Kumar, Matthias Loskyll, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine


Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL

Dec 18, 2018
Anusha Nagabandi, Chelsea Finn, Sergey Levine


Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks

Dec 18, 2018
Stephen James, Paul Wohlhart, Mrinal Kalakrishnan, Dmitry Kalashnikov, Alex Irpan, Julian Ibarz, Sergey Levine, Raia Hadsell, Konstantinos Bousmalis


Soft Actor-Critic Algorithms and Applications

Dec 13, 2018
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, Sergey Levine


Visual Reinforcement Learning with Imagined Goals

Dec 04, 2018
Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine


Visual Memory for Robust Path Following

Dec 03, 2018
Ashish Kumar, Saurabh Gupta, David Fouhey, Sergey Levine, Jitendra Malik
