Sergey Levine

AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control
Apr 05, 2021
Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa

Benchmarks for Deep Off-Policy Evaluation
Mar 30, 2021
Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Tom Le Paine

Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots
Mar 26, 2021
Zhongyu Li, Xuxin Cheng, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth, Koushil Sreenath

Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
Mar 23, 2021
Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu

Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
Mar 23, 2021
Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov

Maximum Entropy RL (Provably) Solves Some Robust RL Problems
Mar 10, 2021
Benjamin Eysenbach, Sergey Levine

PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning
Feb 24, 2021
Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar

COMBO: Conservative Offline Model-Based Policy Optimization
Feb 16, 2021
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn

Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
Feb 16, 2021
Justin Fu, Sergey Levine

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned
Feb 04, 2021
Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine
