
Sergey Levine

UC Berkeley

Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability

Jul 13, 2021

Explore and Control with Adversarial Surprise

Jul 12, 2021

Pragmatic Image Compression for Human-in-the-Loop Decision-Making

Jul 07, 2021

Multi-Robot Deep Reinforcement Learning for Mobile Navigation

Jun 24, 2021

Model-Based Reinforcement Learning via Latent-Space Collocation

Jun 24, 2021

FitVid: Overfitting in Pixel-Level Video Prediction

Jun 24, 2021

Which Mutual-Information Representation Learning Objectives are Sufficient for Control?

Jun 14, 2021

What Can I Do Here? Learning New Skills by Imagining Visual Affordances

Jun 13, 2021

Reinforcement Learning as One Big Sequence Modeling Problem

Jun 03, 2021

Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning

Jun 02, 2021