Sergey Levine

Understanding the Complexity Gains of Single-Task RL with a Curriculum
Dec 24, 2022
Qiyang Li, Yuexiang Zhai, Yi Ma, Sergey Levine

Imitation Is Not Enough: Robustifying Imitation with Reinforcement Learning for Challenging Driving Scenarios
Dec 21, 2022
Yiren Lu, Justin Fu, George Tucker, Xinlei Pan, Eli Bronstein, Becca Roelofs, Benjamin Sapp, Brandyn White, Aleksandra Faust, Shimon Whiteson, Dragomir Anguelov, Sergey Levine

Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance
Dec 19, 2022
Kelvin Xu, Zheyuan Hu, Ria Doshi, Aaron Rovinsky, Vikash Kumar, Abhishek Gupta, Sergey Levine

Offline Reinforcement Learning for Visual Navigation
Dec 16, 2022
Dhruv Shah, Arjun Bhorkar, Hrish Leen, Ilya Kostrikov, Nick Rhinehart, Sergey Levine

RT-1: Robotics Transformer for Real-World Control at Scale
Dec 13, 2022
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich

Learning Robotic Navigation from Experience: Principles, Methods, and Recent Results
Dec 13, 2022
Sergey Levine, Dhruv Shah

Confidence-Conditioned Value Functions for Offline Reinforcement Learning
Dec 08, 2022
Joey Hong, Aviral Kumar, Sergey Levine

Multi-Task Imitation Learning for Linear Dynamical Systems
Dec 01, 2022
Thomas T. Zhang, Katie Kang, Bruce D. Lee, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni

Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes
Nov 28, 2022
Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine

Data-Driven Offline Decision-Making via Invariant Representation Learning
Nov 25, 2022
Han Qi, Yi Su, Aviral Kumar, Sergey Levine
