Sergey Levine

UC Berkeley

Learning To Reach Goals Without Reinforcement Learning

Dec 13, 2019

SMiRL: Surprise Minimizing RL in Dynamic Environments

Dec 11, 2019

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos

Dec 10, 2019

Unsupervised Curricula for Visual Meta-Reinforcement Learning

Dec 09, 2019

Learning Human Objectives by Evaluating Hypothetical Behavior

Dec 05, 2019

Entity Abstraction in Visual Model-Based Reinforcement Learning

Dec 04, 2019

Planning with Goal-Conditioned Policies

Nov 19, 2019

Plan Arithmetic: Compositional Plan Vectors for Multi-Task Control

Oct 30, 2019

Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning

Oct 25, 2019

RoboNet: Large-Scale Multi-Robot Learning

Oct 24, 2019