
Sergey Levine

UC Berkeley

Learning Powerful Policies by Using Consistent Dynamics Model

Jun 11, 2019

Watch, Try, Learn: Meta-Learning from Demonstrations and Reward

Jun 07, 2019

Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction

Jun 03, 2019

Extending Deep Model Predictive Control with Safety Augmented Value Estimation from Demonstrations

Jun 03, 2019

Causal Confusion in Imitation Learning

May 28, 2019

Adversarial Policies: Attacking Deep Reinforcement Learning

May 25, 2019

MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies

May 23, 2019

REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning

May 17, 2019

End-to-End Robotic Reinforcement Learning without Reward Engineering

May 16, 2019

PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings

May 07, 2019