Sergey Levine

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

Apr 28, 2021
Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine


MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

Apr 27, 2021
Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman


DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies

Apr 23, 2021
Soroush Nasiriany, Vitchyr H. Pong, Ashvin Nair, Alexander Khazatsky, Glen Berseth, Sergey Levine


Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention

Apr 22, 2021
Abhishek Gupta, Justin Yu, Tony Z. Zhao, Vikash Kumar, Aaron Rovinsky, Kelvin Xu, Thomas Devlin, Sergey Levine


Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models

Apr 21, 2021
Nicholas Rhinehart, Jeff He, Charles Packer, Matthew A. Wright, Rowan McAllister, Joseph E. Gonzalez, Sergey Levine


Outcome-Driven Reinforcement Learning via Variational Inference

Apr 20, 2021
Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine


RECON: Rapid Exploration for Open-World Navigation with Latent Goal Models

Apr 14, 2021
Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine
