Denis Yarats

Watch and Match: Supercharging Imitation with Regularized Optimal Transport

Jun 30, 2022
Siddhant Haldar, Vaibhav Mathur, Denis Yarats, Lerrel Pinto

Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning

Feb 08, 2022
Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto

CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery

Feb 01, 2022
Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel

URLB: Unsupervised Reinforcement Learning Benchmark

Oct 28, 2021
Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel

A Robot Cluster for Reproducible Research in Dexterous Manipulation

Sep 22, 2021
Stefan Bauer, Felix Widmaier, Manuel Wüthrich, Niklas Funk, Julen Urain De Jesus, Jan Peters, Joe Watson, Claire Chen, Krishnan Srinivasan, Junwu Zhang, Jeffrey Zhang, Matthew R. Walter, Rishabh Madan, Charles Schaff, Takahiro Maeda, Takuma Yoneda, Denis Yarats, Arthur Allshire, Ethan K. Gordon, Tapomayukh Bhattacharjee, Siddhartha S. Srinivasa, Animesh Garg, Annika Buchholz, Sebastian Stark, Thomas Steinbrenner, Joel Akpo, Shruti Joshi, Vaibhav Agrawal, Bernhard Schölkopf

Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning

Jul 20, 2021
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto

Reinforcement Learning with Prototypical Representations

Feb 22, 2021
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto

Learning Navigation Skills for Legged Robots with Learned Robot Embeddings

Nov 24, 2020
Joanne Truong, Denis Yarats, Tianyu Li, Franziska Meier, Sonia Chernova, Dhruv Batra, Akshara Rai

On the model-based stochastic value gradient for continuous reinforcement learning

Aug 28, 2020
Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson

Automatic Data Augmentation for Generalization in Deep Reinforcement Learning

Jun 23, 2020
Roberta Raileanu, Max Goldstein, Denis Yarats, Ilya Kostrikov, Rob Fergus
