Sergey Levine

Data-Efficient Hierarchical Reinforcement Learning

Oct 05, 2018
Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine

EMI: Exploration with Mutual Information Maximizing State and Action Embeddings

Oct 04, 2018
Hyoungseok Kim, Jaekyeom Kim, Yeonwoo Jeong, Sergey Levine, Hyun Oh Song

Near-Optimal Representation Learning for Hierarchical Reinforcement Learning

Oct 02, 2018
Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine

Time Reversal as Self-Supervision

Oct 02, 2018
Suraj Nair, Mohammad Babaeizadeh, Chelsea Finn, Sergey Levine, Vikash Kumar

Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow

Oct 01, 2018
Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine

Few-Shot Goal Inference for Visuomotor Learning and Planning

Sep 30, 2018
Annie Xie, Avi Singh, Sergey Levine, Chelsea Finn

Latent Space Policies for Hierarchical Reinforcement Learning

Sep 03, 2018
Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, Sergey Levine

SOLAR: Deep Structured Latent Representations for Model-Based Reinforcement Learning

Aug 28, 2018
Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine

Learning Robust Rewards with Adversarial Inverse Reinforcement Learning

Aug 13, 2018
Justin Fu, Katie Luo, Sergey Levine

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor

Aug 08, 2018
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine
