Learning State Abstractions for Transfer in Continuous Control

Feb 08, 2020
Kavosh Asadi, David Abel, Michael L. Littman


Deep RBF Value Functions for Continuous Control

Feb 05, 2020
Kavosh Asadi, Ronald E. Parr, George D. Konidaris, Michael L. Littman


Lipschitz Lifelong Reinforcement Learning

Jan 17, 2020
Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman

* Submitted to ICML 2020, 21 pages, 15 figures 

Combating the Compounding-Error Problem with a Multi-step Model

May 30, 2019
Kavosh Asadi, Dipendra Misra, Seungchan Kim, Michael L. Littman


Mitigating Planner Overfitting in Model-Based Reinforcement Learning

Dec 03, 2018
Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman


Towards a Simple Approach to Multi-step Model-based Reinforcement Learning

Oct 31, 2018
Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman


Lipschitz Continuity in Model-based Reinforcement Learning

Jul 27, 2018
Kavosh Asadi, Dipendra Misra, Michael L. Littman

* Accepted at the 35th International Conference on Machine Learning (ICML 2018)

Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning

Jul 08, 2018
Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman

* Accepted at the FAIM workshop "Prediction and Generative Modeling in Reinforcement Learning", Stockholm, Sweden, 2018 

An Alternative Softmax Operator for Reinforcement Learning

Jun 14, 2017
Kavosh Asadi, Michael L. Littman


Sample-efficient Deep Reinforcement Learning for Dialog Control

Dec 18, 2016
Kavosh Asadi, Jason D. Williams
