Aravind Rajeswaran

RoboHive: A Unified Framework for Robot Learning

Oct 10, 2023
Vikash Kumar, Rutav Shah, Gaoyue Zhou, Vincent Moens, Vittorio Caggiano, Jay Vakil, Abhishek Gupta, Aravind Rajeswaran

We present RoboHive, a comprehensive software platform and ecosystem for research in the field of Robot Learning and Embodied Artificial Intelligence. Our platform encompasses a diverse range of pre-existing and novel environments, including dexterous manipulation with the Shadow Hand, whole-arm manipulation tasks with Franka and Fetch robots, and quadruped locomotion, among others. The included environments are organized into, and cover, multiple domains such as hand manipulation, locomotion, multi-task, multi-agent, and musculoskeletal control. In comparison to prior works, RoboHive offers a streamlined and unified task interface that depends on only a minimal set of well-maintained packages, features tasks with high physics fidelity and rich visual diversity, and supports common hardware drivers for real-world deployment. The unified interface of RoboHive offers a convenient and accessible abstraction for algorithmic research in imitation, reinforcement, multi-task, and hierarchical learning. Furthermore, RoboHive includes expert demonstrations and baseline results for most environments, providing a standard for benchmarking and comparisons. Details: https://sites.google.com/view/robohive

* Accepted at 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks 
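
As a rough illustration of the unified, gym-style task interface described above, the sketch below creates an environment and steps it with random actions. The environment id `FrankaReachRandom-v0`, the registration-on-import behavior, and the classic 4-tuple step API are assumptions for illustration, not verified RoboHive specifics.

```python
# Minimal sketch of interacting with a RoboHive-style gym environment.
# Assumptions: `robohive` is installed and registers its environments with
# gym on import; the environment id below is illustrative and may differ.
import gym
import robohive  # noqa: F401  (importing registers the environments)

env = gym.make("FrankaReachRandom-v0")  # hypothetical task id
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # random policy placeholder
    obs, reward, done, info = env.step(action)  # classic gym step API
    if done:
        obs = env.reset()
env.close()
```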

What do we learn from a large-scale study of pre-trained visual representations in sim and real environments?

Oct 03, 2023
Sneha Silwal, Karmesh Yadav, Tingfan Wu, Jay Vakil, Arjun Majumdar, Sergio Arnaud, Claire Chen, Vincent-Pierre Berges, Dhruv Batra, Aravind Rajeswaran, Mrinal Kalakrishnan, Franziska Meier, Oleksandr Maksymets

We present a large empirical investigation on the use of pre-trained visual representations (PVRs) for training downstream policies that execute real-world tasks. Our study spans five different PVRs, two different policy-learning paradigms (imitation and reinforcement learning), and three different robots across five distinct manipulation and indoor navigation tasks. From this effort, we arrive at three insights: 1) the performance trends of PVRs in simulation are generally indicative of their trends in the real world, 2) the use of PVRs enables a first-of-its-kind result with indoor ImageNav (zero-shot transfer to a held-out scene in the real world), and 3) the benefits from variations in PVRs, primarily data augmentation and fine-tuning, also transfer to real-world performance. See the project website for additional details and visuals.

* Project website https://pvrs-sim2real.github.io/ 
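
A minimal sketch of the frozen-PVR-plus-policy-head recipe evaluated in the study is given below. The torchvision ResNet-18 encoder, feature size, and behavior-cloning loss are stand-ins chosen for illustration; they are not any of the paper's five PVRs or its exact training setup.

```python
# Sketch of training a small policy head on top of a frozen pre-trained
# visual representation (PVR). The torchvision ResNet-18 is a stand-in for
# the PVRs studied in the paper, not one of its exact models.
import torch
import torch.nn as nn
from torchvision.models import resnet18

encoder = resnet18(weights="IMAGENET1K_V1")  # requires torchvision >= 0.13
encoder.fc = nn.Identity()          # expose 512-d features
for p in encoder.parameters():
    p.requires_grad = False         # keep the PVR frozen
encoder.eval()

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 7))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(images, expert_actions):
    """One behavior-cloning update on frozen PVR features."""
    with torch.no_grad():
        feats = encoder(images)                 # (B, 512) features
    loss = nn.functional.mse_loss(policy(feats), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```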

MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation

Sep 25, 2023
Patrick Lancaster, Nicklas Hansen, Aravind Rajeswaran, Vikash Kumar

Robotic systems that aspire to operate in uninstrumented real-world environments must perceive the world directly via onboard sensing. Vision-based learning systems aim to eliminate the need for environment instrumentation by building an implicit understanding of the world from raw pixels, but navigating the contact-rich, high-dimensional search space from only sparse visual reward signals significantly exacerbates the challenge of exploration. The applicability of such systems is thus typically restricted to simulated or heavily engineered environments, since agent exploration in the real world without the guidance of explicit state estimation and dense rewards can lead to unsafe behavior and catastrophic failures. In this study, we isolate the root causes behind these limitations to develop a system, called MoDem-V2, capable of learning contact-rich manipulation directly in the uninstrumented real world. Building on recent advances in model-based reinforcement learning (MBRL), demonstration bootstrapping, and effective exploration, MoDem-V2 acquires contact-rich dexterous manipulation skills directly in the real world. We identify key ingredients for leveraging demonstrations in model learning while respecting real-world safety considerations -- exploration centering, agency handover, and actor-critic ensembles. We empirically demonstrate the contribution of these ingredients in four complex visuo-motor manipulation problems in both simulation and the real world. To the best of our knowledge, our work presents the first successful system for demonstration-augmented visual MBRL trained directly in the real world. Visit https://sites.google.com/view/modem-v2 for videos and more details.

* 9 pages, 8 figures 
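
The ingredients named in the abstract suggest one natural reading of "agency handover": let the learned policy act only when an ensemble of critics agrees about its proposed action, and otherwise fall back to a conservative base policy. The sketch below is speculative; the class names, threshold, and gating rule are assumptions, not the MoDem-V2 implementation.

```python
# Speculative sketch of agency handover gated by critic-ensemble
# disagreement: the agent acts autonomously only when an ensemble of
# Q-functions agrees about the proposed action; otherwise control stays
# with a safer base/demonstration policy. All names and the threshold are
# assumptions, not the MoDem-V2 code.
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    def __init__(self, obs_dim, act_dim, n_members=5):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
                          nn.Linear(256, 1))
            for _ in range(n_members)
        ])

    def forward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        return torch.stack([q(x) for q in self.members], dim=0)  # (M, B, 1)

def select_action(obs, agent_policy, base_policy, q_ensemble, max_std=0.1):
    """Hand control to the learned policy only when the ensemble agrees."""
    proposed = agent_policy(obs)
    disagreement = q_ensemble(obs, proposed).std(dim=0).mean()
    if disagreement < max_std:
        return proposed           # agent keeps agency
    return base_policy(obs)       # hand control back to the conservative policy

# Tiny usage example with placeholder policies.
obs_dim, act_dim = 32, 7
q_ens = QEnsemble(obs_dim, act_dim)
agent = nn.Linear(obs_dim, act_dim)
base = nn.Linear(obs_dim, act_dim)
action = select_action(torch.randn(1, obs_dim), agent, base, q_ens)
```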

Train Offline, Test Online: A Real Robot Learning Benchmark

Jun 01, 2023
Gaoyue Zhou, Victoria Dean, Mohan Kumar Srirama, Aravind Rajeswaran, Jyothish Pari, Kyle Hatch, Aryan Jain, Tianhe Yu, Pieter Abbeel, Lerrel Pinto, Chelsea Finn, Abhinav Gupta

Three challenges limit the progress of robot learning research: robots are expensive (few labs can participate), everyone uses different robots (findings do not generalize across labs), and we lack internet-scale robotics data. We take on these challenges via a new benchmark: Train Offline, Test Online (TOTO). TOTO provides remote users with access to shared robotic hardware for evaluating methods on common tasks and an open-source dataset of these tasks for offline training. Its manipulation task suite requires challenging generalization to unseen objects, positions, and lighting. We present initial results on TOTO comparing five pretrained visual representations and four offline policy learning baselines, remotely contributed by five institutions. The real promise of TOTO, however, lies in the future: we release the benchmark for additional submissions from any user, enabling easy, direct comparison to several methods without the need to obtain hardware or collect data.

* Accepted to ICRA 2023 
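
The benchmark's workflow (train entirely offline, then submit for online evaluation on shared hardware) can be sketched as follows. The synthetic tensors, behavior-cloning objective, and file-based export step are placeholders, not the actual TOTO data format or submission API.

```python
# Illustrative "train offline, test online" loop: behavior cloning on a
# fixed offline dataset, then exporting the policy for remote evaluation.
# The synthetic tensors stand in for the released TOTO data; shapes and
# the export step are assumptions, not the benchmark's API.
import torch
import torch.nn as nn

obs = torch.randn(1024, 2048)        # placeholder visual features
actions = torch.randn(1024, 7)       # placeholder joint-space actions

policy = nn.Sequential(nn.Linear(obs.shape[1], 256), nn.ReLU(),
                       nn.Linear(256, actions.shape[1]))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for epoch in range(50):              # training is entirely offline
    loss = nn.functional.mse_loss(policy(obs), actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained policy is then submitted for online evaluation on the shared
# hardware rather than being rolled out locally.
torch.save(policy.state_dict(), "toto_submission_policy.pt")
```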

Masked Trajectory Models for Prediction, Representation, and Control

May 04, 2023
Philipp Wu, Arjun Majumdar, Kevin Stone, Yixin Lin, Igor Mordatch, Pieter Abbeel, Aravind Rajeswaran

We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities simply by choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, an inverse dynamics model, or even an offline RL agent. Through extensive experiments in several continuous control tasks, we show that the same MTM network -- i.e., the same weights -- can match or outperform specialized networks trained for the aforementioned capabilities. Additionally, we find that state representations learned by MTM can significantly accelerate the learning speed of traditional RL algorithms. Finally, on offline RL benchmarks, we find that MTM is competitive with specialized offline RL algorithms, despite MTM being a generic self-supervised learning method without any explicit RL components. Code is available at https://github.com/facebookresearch/mtm

* Accepted for publication at ICML 2023. Project webpage: https://wuphilipp.github.io/mtm/ 
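
The core mechanism, reconstructing a trajectory from a random subset of its own tokens and choosing purposeful masks at inference time, is easy to sketch. The mask ratio, token shapes, and mask constructions below are illustrative assumptions, not the released MTM code.

```python
# Schematic of MTM-style trajectory masking: tokenize a (state, action)
# trajectory, hide a random subset during training, and pick purposeful
# masks at inference (e.g. hide future tokens for forward dynamics).
# Mask ratio and token dimensions are placeholder assumptions.
import torch

def random_mask(seq_len, mask_ratio=0.7):
    """1 = visible token, 0 = token the model must reconstruct."""
    keep = torch.rand(seq_len) > mask_ratio
    return keep.float()

def forward_dynamics_mask(seq_len, t):
    """Show everything up to time t; hide all later tokens."""
    mask = torch.ones(seq_len)
    mask[t + 1:] = 0.0
    return mask

# Training: reconstruct the full trajectory from a random subset of tokens.
traj_tokens = torch.randn(16, 64)          # (seq_len, token_dim) placeholder
train_mask = random_mask(traj_tokens.shape[0])
visible = traj_tokens * train_mask.unsqueeze(-1)
# A transformer model(visible, train_mask) would be trained to reconstruct
# the masked-out entries of traj_tokens.

# Inference: same trained weights, different mask, different capability.
fd_mask = forward_dynamics_mask(traj_tokens.shape[0], t=7)
```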

Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?

Mar 31, 2023
Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, Franziska Meier

We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs), or visual 'foundation models', for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous manipulation, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none is universally dominant. To study the effect of pre-training data scale and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources (over 5.6M images) and ImageNet to train different-sized vision transformers using Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from prior work, we find that scaling dataset size and diversity does not improve performance universally (but does so on average). Our largest model, named VC-1, outperforms all prior PVRs on average but does not universally dominate either. Finally, we show that task- or domain-specific adaptation of VC-1 leads to substantial gains, with VC-1 (adapted) achieving performance competitive with or superior to the best known results on all of the benchmarks in CortexBench. These models required over 10,000 GPU-hours to train and can be found on our website for the benefit of the research community.

* Project website: https://eai-vc.github.io 
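
The pre-training objective is standard Masked Auto-Encoding over image patches. The sketch below shows the patchify-and-mask step with common MAE defaults (16-pixel patches, 75% mask ratio); these are generic conventions and are not claimed to match the exact VC-1 configuration.

```python
# Minimal sketch of MAE-style random patch masking, the pre-training
# objective behind VC-1. Patch size and mask ratio follow common MAE
# defaults, not the exact VC-1 setup.
import torch

def patchify(images, patch=16):
    """(B, 3, H, W) -> (B, N, 3*patch*patch) non-overlapping patches."""
    b, c, h, w = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * patch * patch)

def random_masking(patches, mask_ratio=0.75):
    """Keep a random 25% of patches; the encoder sees only these."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    ids = torch.argsort(torch.rand(b, n), dim=1)[:, :n_keep]
    kept = torch.gather(patches, 1, ids.unsqueeze(-1).expand(-1, -1, d))
    return kept, ids   # a decoder is trained to reconstruct the hidden patches

imgs = torch.randn(4, 3, 224, 224)
kept, ids = random_masking(patchify(imgs))
```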

On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline

Dec 12, 2022
Nicklas Hansen, Zhecheng Yuan, Yanjie Ze, Tongzhou Mu, Aravind Rajeswaran, Hao Su, Huazhe Xu, Xiaolong Wang

We revisit a simple Learning-from-Scratch baseline for visuo-motor control that uses data augmentation and a shallow ConvNet. We find that this baseline has competitive performance with recent methods that leverage frozen visual representations trained on large-scale vision datasets.

* to pre-train; not to pre-train 
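
The baseline is concrete enough to sketch: a random-shift image augmentation feeding a shallow ConvNet trained from scratch alongside the policy. The layer widths and 4-pixel shift below are assumptions in the spirit of that recipe rather than the paper's exact hyperparameters.

```python
# Sketch of the Learning-from-Scratch ingredients: random-shift image
# augmentation and a shallow ConvNet encoder trained end-to-end with the
# policy. Layer widths and the 4-pixel shift are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Pad-and-crop ('random shift') augmentation over a batch of images."""
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, [pad] * 4, mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(b):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

shallow_convnet = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
    nn.Flatten(),
)

imgs = torch.randn(8, 3, 84, 84)
features = shallow_convnet(random_shift(imgs))   # fed to the policy/critic
```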

CACTI: A Framework for Scalable Multi-Task Multi-Scene Visual Imitation Learning

Dec 12, 2022
Zhao Mandi, Homanga Bharadhwaj, Vincent Moens, Shuran Song, Aravind Rajeswaran, Vikash Kumar

Developing robots that are capable of many skills and of generalizing to unseen scenarios requires progress on two fronts: efficient collection of large and diverse datasets, and training of high-capacity policies on the collected data. While large datasets have propelled progress in other fields like computer vision and natural language processing, collecting data of comparable scale is particularly challenging for physical systems like robotics. In this work, we propose a framework to bridge this gap and better scale up robot learning, under the lens of multi-task, multi-scene robot manipulation in kitchen environments. Our framework, named CACTI, has four stages that separately handle data collection, data augmentation, visual representation learning, and imitation policy training. Within the CACTI framework, we highlight the benefit of adapting state-of-the-art models for image generation as part of the augmentation stage, and the significant improvement in training efficiency from using pretrained out-of-domain visual representations at the compression stage. Experimentally, we demonstrate that 1) on a real robot setup, CACTI enables efficient training of a single policy capable of 10 manipulation tasks involving kitchen objects, and robust to varying layouts of distractor objects; 2) in a simulated kitchen environment, CACTI trains a single policy on 18 semantic tasks across up to 50 layout variations per task. The simulation task benchmark and augmented datasets in both real and simulated environments will be released to facilitate future research.
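
The four stages map naturally onto a simple pipeline skeleton, sketched below. Every function body is a placeholder showing only the hand-off points between stages; none of it is the released CACTI code.

```python
# Skeleton of a CACTI-style four-stage pipeline: collect, augment,
# compress (pretrained representation), then train an imitation policy.
# Every function is a placeholder illustrating the data flow only.

def collect_demonstrations(tasks):
    """Stage 1: gather expert trajectories per task (teleop or scripted)."""
    return [{"task": t, "frames": [], "actions": []} for t in tasks]

def augment_with_generative_model(demos):
    """Stage 2: expand visual diversity, e.g. via image-generation models."""
    return demos + [dict(d, frames=list(d["frames"])) for d in demos]

def compress_with_pretrained_encoder(demos):
    """Stage 3: replace raw frames with frozen out-of-domain visual features."""
    for d in demos:
        d["features"] = d["frames"]   # placeholder for encoder(frames)
    return demos

def train_multitask_imitation_policy(demos):
    """Stage 4: fit a single policy across all tasks and scenes."""
    return {"policy": "trained-on-%d-trajectories" % len(demos)}

demos = collect_demonstrations(["open_drawer", "pick_mug"])
policy = train_multitask_imitation_policy(
    compress_with_pretrained_encoder(augment_with_generative_model(demos)))
```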

MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

Dec 12, 2022
Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran

Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms for real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
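
Of the three phases, oversampling of demonstration data is the simplest to illustrate: each training batch draws a fixed fraction of transitions from the demonstration buffer rather than sampling the combined replay buffer uniformly. The 25% ratio and buffer layout below are assumptions, not MoDem's released settings.

```python
# Sketch of demonstration oversampling: draw a fixed fraction of every
# training batch from the demo buffer instead of sampling the combined
# replay buffer uniformly. The ratio and buffer layout are assumptions.
import random

def sample_batch(demo_buffer, interaction_buffer, batch_size=256, demo_frac=0.25):
    n_demo = int(batch_size * demo_frac)
    batch = random.choices(demo_buffer, k=n_demo)
    batch += random.choices(interaction_buffer, k=batch_size - n_demo)
    random.shuffle(batch)
    return batch

# Usage: with only 5 demos, they still make up a quarter of every batch,
# keeping the world model and policy anchored to successful behavior.
demos = [("demo_transition", i) for i in range(5)]
online = [("online_transition", i) for i in range(10000)]
batch = sample_batch(demos, online)
```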

Real World Offline Reinforcement Learning with Realistic Data Source

Oct 12, 2022
Gaoyue Zhou, Liyiming Ke, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, Vikash Kumar

Offline reinforcement learning (ORL) holds great promise for robot learning due to its ability to learn from arbitrary pre-generated experience. However, current ORL benchmarks are almost entirely in simulation and utilize contrived datasets like replay buffers of online RL agents or sub-optimal trajectories, and thus hold limited relevance for real-world robotics. In this work (Real-ORL), we posit that data collected from safe operations of closely related tasks are more practical data sources for real-world robot learning. Under these settings, we perform an extensive empirical study (6500+ trajectories collected over 800+ robot hours and 270+ hours of human labor) evaluating the generalization and transfer capabilities of representative ORL methods on four real-world tabletop manipulation tasks. Our study finds that ORL and imitation learning prefer different action spaces, and that ORL algorithms can generalize by leveraging heterogeneous offline data sources and outperform imitation learning. We release our dataset and implementations at URL: https://sites.google.com/view/real-orl

* Project website: https://sites.google.com/view/real-orl 
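
For context on what an ORL method consumes and optimizes in this setting, the sketch below shows an advantage-weighted regression update, a representative offline-RL-style objective. It is not claimed to be one of the specific algorithms evaluated in the paper, and all shapes and the temperature are illustrative.

```python
# Minimal sketch of an advantage-weighted regression update, a representative
# offline-RL-style objective for learning from pre-generated trajectories.
# Not one of Real-ORL's specific baselines; shapes and temperature are
# illustrative assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(39, 256), nn.ReLU(), nn.Linear(256, 7))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def awr_update(obs, actions, advantages, temperature=1.0):
    """Weight behavior cloning by exp(advantage): imitate better offline actions more."""
    weights = torch.clamp(torch.exp(advantages / temperature), max=20.0)
    per_sample = ((policy(obs) - actions) ** 2).mean(dim=-1)
    loss = (weights * per_sample).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage on placeholder offline data.
loss = awr_update(torch.randn(64, 39), torch.randn(64, 7), torch.randn(64))
```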