
Andrey Kolobov

PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining

Mar 15, 2023

Exploring Levels of Control for a Navigation Assistant for Blind Travelers

Jan 05, 2023

MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control

Aug 15, 2022

The Sandbox Environment for Generalizable Agent Research (SEGAR)

Mar 19, 2022

Heuristic-Guided Reinforcement Learning

Jun 05, 2021

Cross-Trajectory Representation Learning for Zero-Shot Generalization in RL

Jun 04, 2021

Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark

Mar 29, 2021

Policy Improvement from Multiple Experts

Jul 01, 2020

Safe Reinforcement Learning via Curriculum Induction

Jun 22, 2020

Online Learning for Active Cache Synchronization

Feb 27, 2020