Vivek Myers

BridgeData V2: A Dataset for Robot Learning at Scale

Aug 24, 2023
Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers, Kuan Fang, Chelsea Finn, Sergey Levine

We introduce BridgeData V2, a large and diverse dataset of robotic manipulation behaviors designed to facilitate research on scalable robot learning. BridgeData V2 contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot. BridgeData V2 provides extensive task and environment variability, leading to skills that can generalize across environments, domains, and institutions, making the dataset a useful resource for a broad range of researchers. Additionally, the dataset is compatible with a wide variety of open-vocabulary, multi-task learning methods conditioned on goal images or natural language instructions. In our experiments, we train 6 state-of-the-art imitation learning and offline reinforcement learning methods on our dataset, and find that they succeed on a suite of tasks requiring varying amounts of generalization. We also demonstrate that the performance of these methods improves with more data and higher capacity models, and that training on a greater variety of skills leads to improved generalization. By publicly sharing BridgeData V2 and our pre-trained models, we aim to accelerate research in scalable robot learning methods. Project page at https://rail-berkeley.github.io/bridgedata

* 9 pages 
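
To make the dataset's structure concrete, the snippet below is a hypothetical sketch of iterating over stored trajectories; the directory layout and the array keys ("observations", "actions", "language") are assumptions for illustration, not the released format (see the project page for the actual loaders).

```python
# Hypothetical layout: one .npz file per trajectory with "observations",
# "actions", and "language" keys. These names are assumptions for this sketch.
import glob
import numpy as np

def iter_trajectories(root):
    """Yield (observations, actions, instruction) from each trajectory file."""
    for path in sorted(glob.glob(f"{root}/**/*.npz", recursive=True)):
        data = np.load(path, allow_pickle=True)
        # observations: (T, H, W, 3) images; actions: (T, action_dim) controls
        yield data["observations"], data["actions"], str(data["language"])

# Example usage (assumes the data has been downloaded and converted):
for obs, acts, instr in iter_trajectories("/path/to/bridgedata_v2"):
    print(instr, obs.shape, acts.shape)
    break
```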

Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control

Jun 30, 2023
Vivek Myers, Andre He, Kuan Fang, Homer Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine

Our goal is for robots to follow natural language instructions like "put the towel next to the microwave." But getting large amounts of labeled data, i.e. data that contains demonstrations of tasks labeled with the language instruction, is prohibitive. In contrast, obtaining policies that respond to image goals is much easier, because any autonomous trial or demonstration can be labeled in hindsight with its final state as the goal. In this work, we contribute a method that taps into joint image- and goal-conditioned policies with language using only a small amount of language data. Prior work has made progress on this using vision-language models or by jointly training language-goal-conditioned policies, but so far neither method has scaled effectively to real-world robot tasks without significant human annotation. Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but rather to the desired change between the start and goal images that the instruction corresponds to. We then train a policy on this embedding: the policy benefits from all the unlabeled data, but the aligned embedding provides an interface for language to steer the policy. We show instruction following across a variety of manipulation tasks in different scenes, with generalization to language instructions outside of the labeled data. Videos and code for our approach can be found on our website: http://tiny.cc/grif.

* 15 pages, 5 figures 
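
A minimal sketch of the core alignment idea follows, assuming PyTorch and illustrative encoder interfaces (img_encoder, lang_encoder); it is not the paper's implementation, only an InfoNCE-style loss that ties each instruction to the change between its start and goal images.

```python
# Minimal sketch (not the paper's implementation) of aligning language with
# the *change* between start and goal images via a symmetric contrastive loss.
import torch
import torch.nn.functional as F

def alignment_loss(img_encoder, lang_encoder, start, goal, tokens, temp=0.1):
    """InfoNCE-style loss between instructions and (start, goal) embeddings.

    start, goal: (B, C, H, W) image batches; tokens: (B, L) instruction ids.
    img_encoder and lang_encoder are assumed to map into a shared d-dim space.
    """
    z_task = img_encoder(start, goal)   # embeds the desired change of state
    z_lang = lang_encoder(tokens)       # embeds the language instruction
    z_task = F.normalize(z_task, dim=-1)
    z_lang = F.normalize(z_lang, dim=-1)
    logits = z_task @ z_lang.T / temp   # (B, B) similarity matrix
    labels = torch.arange(logits.shape[0], device=logits.device)
    # matched (image-change, language) pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```

A policy conditioned on the shared embedding can then be trained on all goal-labeled data while language steers it through the aligned space.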

Toward Grounded Social Reasoning

Jun 14, 2023
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh

Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
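
The sketch below illustrates one way such an active-perception loop could be wired together; llm, vlm, and capture_closeup are hypothetical callables standing in for the language model, vision-language model, and robot camera, and the prompts are placeholders rather than the paper's.

```python
# llm, vlm, and capture_closeup are hypothetical callables:
#   llm(prompt) -> str, vlm(image, question) -> str, capture_closeup(obj) -> image
def tidy_with_active_perception(objects, llm, vlm, capture_closeup, rounds=2):
    """For each object, actively gather visual evidence before deciding."""
    decisions = {}
    for obj in objects:
        context = f"Object on the desk: {obj}."
        for _ in range(rounds):
            # the LLM decides what information is still missing
            question = llm(context + "\nWhat would you ask to decide whether "
                                     "it is socially appropriate to put this away?")
            image = capture_closeup(obj)      # the robot gathers the evidence
            answer = vlm(image, question)     # the VLM grounds the answer
            context += f"\nQ: {question}\nA: {answer}"
        decisions[obj] = llm(context + "\nGiven the answers above, how should "
                                       "the robot tidy this object appropriately?")
    return decisions
```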


Active Reward Learning from Online Preferences

Feb 27, 2023
Vivek Myers, Erdem Bıyık, Dorsa Sadigh

Robot policies need to adapt to human preferences and/or new environments. Human experts may have the domain knowledge required to help robots achieve this adaptation. However, existing works often require costly offline re-training on human feedback, and that feedback usually needs to be frequent and is too complex for humans to provide reliably. To avoid placing undue burden on human experts and allow quick adaptation in critical real-world situations, we propose designing and sparingly presenting easy-to-answer pairwise action preference queries in an online fashion. Our approach designs queries and determines when to present them to maximize the expected value derived from the queries' information. We demonstrate our approach with experiments in simulation, human user studies, and real robot experiments. In these settings, our approach outperforms baseline techniques while presenting fewer queries to human experts. Experiment videos, code, and appendices can be found at https://sites.google.com/view/onlineactivepreferences.

* 11 pages, 8 figures, 1 table. Published in the 2023 IEEE International Conference on Robotics and Automation (ICRA) 
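
As a rough illustration of query design by expected value of information, the sketch below scores candidate pairwise queries by the mutual information between the human's answer and sampled reward weights under a Bradley-Terry preference model, and asks only when the best query beats a cost threshold; the functions and thresholds are assumptions, not the paper's algorithm.

```python
# Assumed setup: reward is linear in action features, with S posterior samples
# of the weight vector in w_samples (S, d). Not the paper's exact method.
import numpy as np

def preference_probs(w_samples, feat_a, feat_b, beta=1.0):
    """P(human prefers action a over b) under each sampled reward weight."""
    diff = (w_samples @ feat_a) - (w_samples @ feat_b)    # (S,)
    return 1.0 / (1.0 + np.exp(-beta * diff))

def info_gain(w_samples, feat_a, feat_b):
    """Mutual information between the answer and the reward weights."""
    p = preference_probs(w_samples, feat_a, feat_b)       # (S,)
    p_bar = p.mean()
    def h(q):  # binary entropy, clipped away from 0 and 1
        q = np.clip(q, 1e-9, 1 - 1e-9)
        return -q * np.log(q) - (1 - q) * np.log(1 - q)
    return h(p_bar) - h(p).mean()

def maybe_query(w_samples, candidate_pairs, query_cost=0.05):
    """Return the best (feat_a, feat_b) pair to ask about, or None to skip."""
    gains = [info_gain(w_samples, fa, fb) for fa, fb in candidate_pairs]
    best = int(np.argmax(gains))
    return candidate_pairs[best] if gains[best] > query_cost else None
```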

Bayesian Meta-Learning Through Variational Gaussian Processes

Oct 21, 2021
Vivek Myers, Nikhil Sardana

Recent advances in the field of meta-learning have tackled domains consisting of large numbers of small ("few-shot") supervised learning tasks. Meta-learning algorithms must be able to rapidly adapt to any individual few-shot task, fitting to a small support set within a task and using it to predict the labels of the task's query set. This problem setting can be extended to the Bayesian context, wherein rather than predicting a single label for each query data point, a model predicts a distribution of labels capturing its uncertainty. Successful methods in this domain include Bayesian ensembling of MAML-based models, Bayesian neural networks, and Gaussian processes with learned deep kernel and mean functions. While Gaussian processes have a robust Bayesian interpretation in the meta-learning context, they do not naturally model non-Gaussian predictive posteriors for expressing uncertainty. In this paper, we design a theoretically principled method, VMGP, extending Gaussian-process-based meta-learning to allow for high-quality, arbitrary non-Gaussian uncertainty predictions. On benchmark environments with complex non-smooth or discontinuous structure, we find our VMGP method performs significantly better than existing Bayesian meta-learning baselines.
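
For context, the snippet below sketches only the Gaussian-process backbone such a method builds on: exact GP posterior prediction with an RBF kernel over features from a (here user-supplied, in practice learned) feature map. The variational machinery that produces non-Gaussian predictive posteriors is omitted, and all names are illustrative.

```python
# Simplified GP regression over learned features; not the VMGP method itself.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    """RBF kernel between feature matrices a (N, d) and b (M, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_support, y_support, x_query, feature_map, noise=1e-2):
    """Posterior mean and covariance at query points given a task's support set."""
    fs, fq = feature_map(x_support), feature_map(x_query)
    k_ss = rbf_kernel(fs, fs) + noise * np.eye(len(fs))
    k_qs = rbf_kernel(fq, fs)
    k_qq = rbf_kernel(fq, fq)
    alpha = np.linalg.solve(k_ss, y_support)
    mean = k_qs @ alpha
    cov = k_qq - k_qs @ np.linalg.solve(k_ss, k_qs.T)
    return mean, cov
```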


Learning Multimodal Rewards from Rankings

Oct 18, 2021
Vivek Myers, Erdem Bıyık, Nima Anari, Dorsa Sadigh

Learning from human feedback has been shown to be a useful approach for acquiring robot reward functions. However, expert feedback is often assumed to be drawn from an underlying unimodal reward function. This assumption does not always hold, for example in settings where multiple experts provide data or when a single expert provides data for different tasks -- we thus go beyond learning a unimodal reward and focus on learning a multimodal reward function. We formulate multimodal reward learning as a mixture learning problem and develop a novel ranking-based learning approach, where the experts are only required to rank a given set of trajectories. Furthermore, as access to interaction data is often expensive in robotics, we develop an active querying approach to accelerate the learning process. We conduct experiments and user studies using a multi-task variant of OpenAI's LunarLander and a real Fetch robot, where we collect data from multiple users with different preferences. The results suggest that our approach can efficiently learn multimodal reward functions and improve data-efficiency over benchmark methods that we adapt to our learning problem.

* 17 pages, 12 figures, 2 tables. Published at Conference on Robot Learning (CoRL) 2021 
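
A hedged sketch of the modeling idea appears below: a mixture of linear reward functions whose ranking likelihood follows a Plackett-Luce model per mode. The tensor shapes and optimization setup are illustrative assumptions rather than the paper's code.

```python
# Ranking likelihood under a mixture of linear rewards; an illustrative sketch.
import torch

def ranking_nll(trajectory_feats, ranking, weights, mixture_logits, beta=1.0):
    """Negative log-likelihood of one expert ranking under a reward mixture.

    trajectory_feats: (N, d) features of the ranked trajectories
    ranking: list of trajectory indices, best first
    weights: (M, d) one reward vector per mode; mixture_logits: (M,)
    """
    rewards = beta * trajectory_feats @ weights.T            # (N, M)
    log_mix = torch.log_softmax(mixture_logits, dim=0)       # (M,)
    per_mode = []
    for m in range(weights.shape[0]):
        r = rewards[ranking, m]                              # rewards in ranked order
        # Plackett-Luce: each item beats everything still remaining after it
        ll = sum(r[i] - torch.logsumexp(r[i:], dim=0) for i in range(len(r)))
        per_mode.append(ll)
    # marginalize over modes with the mixture weights
    return -torch.logsumexp(log_mix + torch.stack(per_mode), dim=0)
```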

A Hierarchical Approach to Scaling Batch Active Search Over Structured Data

Jul 20, 2020
Vivek Myers, Peyton Greenside

Active search is the process of identifying high-value data points in a large and often high-dimensional parameter space that can be expensive to evaluate. Traditional active search techniques like Bayesian optimization trade off exploration and exploitation over consecutive evaluations, and have historically focused on evaluating a single example or a small number (<5) of examples per round. As modern data sets grow, so does the need to scale active search to large data sets and batch sizes. In this paper, we present a general hierarchical framework based on bandit algorithms to scale active search to large batch sizes by maximizing information derived from the unique structure of each dataset. Our hierarchical framework, Hierarchical Batch Bandit Search (HBBS), strategically distributes batch selection across a learned embedding space by facilitating wide exploration of different structural elements within a dataset. We focus our application of HBBS on modern biology, where large-batch experimentation is often fundamental to the research process, and demonstrate batch design of biological sequences (protein and DNA). We also present a new Gym environment to easily simulate diverse biological sequences and to enable more comprehensive evaluation of active search methods across heterogeneous data sets. The HBBS framework improves upon standard performance, wall-clock, and scalability benchmarks for batch search by using a broad exploration strategy across coarse partitions and fine-grained exploitation within each partition of structured data.

* Presented at the 2020 ICML Workshop on Real World Experiment Design and Active Learning 
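
The sketch below illustrates the two-level idea under simplifying assumptions (a UCB bandit over clusters of a learned embedding space, greedy selection within the chosen cluster); it is not the released HBBS implementation.

```python
# Two-level batch selection: coarse UCB over clusters, fine greedy selection
# within the chosen cluster. All names and defaults are illustrative.
import numpy as np

def select_batch(cluster_ids, scores, history, batch_size, c=1.0):
    """cluster_ids: (N,) cluster label per candidate; scores: (N,) model scores.
    history: dict cluster -> (num_evaluations, mean_observed_value)."""
    clusters = list(np.unique(cluster_ids))
    counts = {k: history.get(k, (0, 0.0))[0] for k in clusters}
    means = {k: history.get(k, (0, 0.0))[1] for k in clusters}
    total = sum(counts.values()) + 1
    available = np.ones(len(scores), dtype=bool)
    chosen = []
    for _ in range(batch_size):
        # coarse level: UCB over clusters; counts include picks made in this
        # batch, which spreads selection across different partitions
        ucb = {k: means[k] + c * np.sqrt(np.log(total + len(chosen)) / (counts[k] + 1))
               for k in clusters}
        k = max(ucb, key=ucb.get)
        # fine level: exploit the best remaining candidate in the chosen cluster
        mask = available & (cluster_ids == k)
        if not mask.any():
            mask = available
        idx = int(np.argmax(np.where(mask, scores, -np.inf)))
        chosen.append(idx)
        available[idx] = False
        counts[k] += 1
    return chosen
```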