
Sergey Levine

UC Berkeley

Rearrangement: A Challenge for Embodied AI

Nov 03, 2020

COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning

Oct 27, 2020

Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning

Oct 27, 2020

Conservative Safety Critics for Exploration

Oct 27, 2020

γ-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction

Oct 27, 2020

One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL

Oct 27, 2020

OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning

Oct 27, 2020

MELD: Meta-Reinforcement Learning from Images via Latent State Models

Oct 26, 2020

LaND: Learning to Navigate from Disengagements

Oct 09, 2020

Multi-agent Social Reinforcement Learning Improves Generalization

Oct 01, 2020