Ilya Kostrikov

RvS: What is Essential for Offline RL via Supervised Learning?

Dec 20, 2021
Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine

Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions

Nov 29, 2021
Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, Jonathan Tompson

Offline Reinforcement Learning with Implicit Q-Learning

Oct 12, 2021
Ilya Kostrikov, Ashvin Nair, Sergey Levine

Offline Reinforcement Learning with Fisher Divergence Critic Regularization

Mar 14, 2021
Ilya Kostrikov, Jonathan Tompson, Rob Fergus, Ofir Nachum

Statistical Bootstrapping for Uncertainty Estimation in Off-Policy Evaluation

Jul 27, 2020
Ilya Kostrikov, Ofir Nachum

Automatic Data Augmentation for Generalization in Deep Reinforcement Learning

Jun 23, 2020
Roberta Raileanu, Max Goldstein, Denis Yarats, Ilya Kostrikov, Rob Fergus

Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

Apr 28, 2020
Ilya Kostrikov, Denis Yarats, Rob Fergus

Imitation Learning via Off-Policy Distribution Matching

Dec 10, 2019
Ilya Kostrikov, Ofir Nachum, Jonathan Tompson

AlgaeDICE: Policy Gradient from Arbitrary Experience

Dec 04, 2019
Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, Dale Schuurmans

Improving Sample Efficiency in Model-Free Reinforcement Learning from Images

Oct 07, 2019
Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, Rob Fergus
