
Robert Dadashi

Get Back Here: Robust Imitation by Return-to-Distribution Planning

May 02, 2023

Learning Energy Networks with Generalized Fenchel-Young Losses

May 19, 2022

Continuous Control with Action Quantization from Demonstrations

Oct 19, 2021

Offline Reinforcement Learning as Anti-Exploration

Jun 11, 2021

What Matters for Adversarial Imitation Learning?

Jun 01, 2021

Hyperparameter Selection for Imitation Learning

May 25, 2021

Offline Reinforcement Learning with Pseudometric Learning

Mar 02, 2021

Show me the Way: Intrinsic Motivation from Demonstrations

Jun 23, 2020

Primal Wasserstein Imitation Learning

Jun 08, 2020

The Value-Improvement Path: Towards Better Representations for Reinforcement Learning

Jun 03, 2020