When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?

Apr 12, 2022
Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine

* ICLR 2022. First two authors contributed equally 


Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization

Feb 17, 2022
Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine


How to Leverage Unlabeled Data in Offline Reinforcement Learning

Feb 03, 2022
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine


DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

Dec 09, 2021
Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine


Data-Driven Offline Optimization For Architecting Hardware Accelerators

Oct 20, 2021
Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine

* First two authors contributed equally 


A Workflow for Offline Model-Free Robotic Reinforcement Learning

Sep 23, 2021
Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine

* CoRL 2021. Project Website: https://sites.google.com/view/offline-rl-workflow. First two authors contributed equally 


Conservative Data Sharing for Multi-Task Offline Reinforcement Learning

Sep 16, 2021
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn


Conservative Objective Models for Effective Offline Model-Based Optimization

Jul 14, 2021
Brandon Trabucco, Aviral Kumar, Xinyang Geng, Sergey Levine

* ICML 2021. First two authors contributed equally. Code at: https://github.com/brandontrabucco/design-baselines/blob/c65a53fe1e6567b740f0adf60c5db9921c1f2330/design_baselines/coms_cleaned/__init__.py 


Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability

Jul 13, 2021
Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine

* First two authors contributed equally 


Benchmarks for Deep Off-Policy Evaluation

Mar 30, 2021
Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R. Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, Sergey Levine, Tom Le Paine

* ICLR 2021. Policies and evaluation code are available at https://github.com/google-research/deep_ope
