Publications by Tianhe Yu

Conservative Data Sharing for Multi-Task Offline Reinforcement Learning
Sep 16, 2021
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn

Efficiently Identifying Task Groupings for Multi-Task Learning
Sep 10, 2021
Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, Chelsea Finn

Visual Adversarial Imitation Learning using Variational Models
Jul 16, 2021
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn

COMBO: Conservative Offline Model-Based Policy Optimization
Feb 16, 2021
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn

Offline Reinforcement Learning from Images with Latent Space Models
Dec 21, 2020
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn

Variable-Shot Adaptation for Online Meta-Learning
Dec 14, 2020
Tianhe Yu, Xinyang Geng, Chelsea Finn, Sergey Levine
* First two authors contributed equally

Measuring and Harnessing Transference in Multi-Task Learning
Oct 29, 2020
Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, Chelsea Finn

MOPO: Model-based Offline Policy Optimization
May 27, 2020
Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma
* First two authors contributed equally. Last two authors advised equally.

Gradient Surgery for Multi-Task Learning
Jan 19, 2020
Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn

Meta-Inverse Reinforcement Learning with Probabilistic Context Variables
Oct 26, 2019
Lantao Yu, Tianhe Yu, Chelsea Finn, Stefano Ermon
* NeurIPS 2019

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
Oct 24, 2019
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, Sergey Levine
* CoRL 2019. Videos are available at meta-world.github.io and open-source code is available at https://github.com/rlworkgroup/metaworld

Unsupervised Visuomotor Control through Distributional Planning Networks
Feb 14, 2019
Tianhe Yu, Gleb Shevchuk, Dorsa Sadigh, Chelsea Finn
* Videos available at https://sites.google.com/view/dpn-public/

One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks
Oct 25, 2018
Tianhe Yu, Pieter Abbeel, Sergey Levine, Chelsea Finn
* Video results available at https://sites.google.com/view/one-shot-hil

One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning
Feb 05, 2018
Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Tianhao Zhang, Pieter Abbeel, Sergey Levine
* First two authors contributed equally. Video available at https://sites.google.com/view/daml

One-Shot Visual Imitation Learning via Meta-Learning
Sep 14, 2017
Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, Sergey Levine
* Conference on Robot Learning, 2017 (to appear). First two authors contributed equally. Video available at https://sites.google.com/view/one-shot-imitation

Real-Time User-Guided Image Colorization with Learned Deep Priors
May 08, 2017
Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, Alexei A. Efros
* Accepted to SIGGRAPH 2017. Project page: https://richzhang.github.io/ideepcolor

Generalizing Skills with Semi-Supervised Reinforcement Learning
Mar 09, 2017
Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine
* ICLR 2017