Andre He

BridgeData V2: A Dataset for Robot Learning at Scale

Aug 24, 2023
Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers, Kuan Fang, Chelsea Finn, Sergey Levine

We introduce BridgeData V2, a large and diverse dataset of robotic manipulation behaviors designed to facilitate research on scalable robot learning. BridgeData V2 contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot. BridgeData V2 provides extensive task and environment variability, leading to skills that can generalize across environments, domains, and institutions, making the dataset a useful resource for a broad range of researchers. Additionally, the dataset is compatible with a wide variety of open-vocabulary, multi-task learning methods conditioned on goal images or natural language instructions. In our experiments, we train 6 state-of-the-art imitation learning and offline reinforcement learning methods on our dataset, and find that they succeed on a suite of tasks requiring varying amounts of generalization. We also demonstrate that the performance of these methods improves with more data and higher capacity models, and that training on a greater variety of skills leads to improved generalization. By publicly sharing BridgeData V2 and our pre-trained models, we aim to accelerate research in scalable robot learning methods. Project page at https://rail-berkeley.github.io/bridgedata

* 9 pages 
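
As a toy illustration of the goal-image conditioning the dataset supports, the sketch below shows hindsight relabeling of a trajectory into (observation, goal, action) examples for goal-conditioned imitation learning. This is not the official BridgeData V2 loader; the field names and array shapes are hypothetical stand-ins.

```python
# Illustrative sketch (not the official BridgeData V2 code): turn one raw
# trajectory into (observation, goal, action) pairs for goal-conditioned
# behavioral cloning via hindsight relabeling. Field names are hypothetical.
import numpy as np

def hindsight_relabel(trajectory, rng):
    """Pair each step with a goal frame drawn from a later step of the same
    trajectory; the final frame is always a valid goal."""
    frames = trajectory["images"]      # (T, H, W, 3) uint8 camera frames
    actions = trajectory["actions"]    # (T, action_dim) robot commands
    examples = []
    T = len(frames)
    for t in range(T):
        goal_t = rng.integers(t, T)    # any frame from t onward can serve as the goal
        examples.append({
            "observation": frames[t],
            "goal": frames[goal_t],
            "action": actions[t],
        })
    return examples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for one trajectory: 20 steps of 64x64 RGB frames, 7-DoF actions.
    traj = {
        "images": rng.integers(0, 256, size=(20, 64, 64, 3), dtype=np.uint8),
        "actions": rng.normal(size=(20, 7)).astype(np.float32),
    }
    batch = hindsight_relabel(traj, rng)
    print(len(batch), batch[0]["observation"].shape, batch[0]["action"].shape)
```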

Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control

Jun 30, 2023
Vivek Myers, Andre He, Kuan Fang, Homer Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine

Our goal is for robots to follow natural language instructions like "put the towel next to the microwave." But getting large amounts of labeled data, i.e., data that contains demonstrations of tasks labeled with the language instruction, is prohibitively expensive. In contrast, obtaining policies that respond to image goals is much easier, because any autonomous trial or demonstration can be labeled in hindsight with its final state as the goal. In this work, we contribute a method that taps into joint image- and goal-conditioned policies with language using only a small amount of language data. Prior work has made progress on this using vision-language models or by jointly training language-goal-conditioned policies, but so far neither method has scaled effectively to real-world robot tasks without significant human annotation. Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but rather to the desired change between the start and goal images that the instruction corresponds to. We then train a policy on this embedding: the policy benefits from all the unlabeled data, but the aligned embedding provides an interface for language to steer the policy. We show instruction following across a variety of manipulation tasks in different scenes, with generalization to language instructions outside of the labeled data. Videos and code for our approach can be found on our website: http://tiny.cc/grif.

* 15 pages, 5 figures 
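
The abstract's key idea, aligning language not to the goal image but to the change between start and goal images, could be instantiated with a contrastive objective; the sketch below shows one such loss in PyTorch. The encoders are replaced by random tensors, and the specific loss form is an assumption rather than the paper's exact training objective.

```python
# Minimal sketch of the alignment idea, with hypothetical encoders: each
# instruction embedding is matched to the embedded *change* between its start
# and goal images via an InfoNCE-style contrastive loss (an assumption here).
import torch
import torch.nn.functional as F

def alignment_loss(lang_emb, start_emb, goal_emb, temperature=0.1):
    """Align each instruction with its own start->goal change and push it away
    from the changes belonging to the rest of the batch."""
    change = F.normalize(goal_emb - start_emb, dim=-1)   # desired change, unit norm
    lang = F.normalize(lang_emb, dim=-1)
    logits = lang @ change.T / temperature               # (B, B) similarity matrix
    labels = torch.arange(lang.shape[0])                 # matching pairs lie on the diagonal
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    torch.manual_seed(0)
    B, D = 8, 128                      # toy batch of 8 labeled examples
    lang_emb = torch.randn(B, D)       # stand-in for a text encoder's output
    start_emb = torch.randn(B, D)      # stand-in for an image encoder on start frames
    goal_emb = torch.randn(B, D)       # stand-in for an image encoder on goal frames
    print(alignment_loss(lang_emb, start_emb, goal_emb).item())
```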

Neural Unsupervised Reconstruction of Protolanguage Word Forms

Nov 16, 2022
Andre He, Nicholas Tomlin, Dan Klein

We present a state-of-the-art neural approach to the unsupervised reconstruction of ancient word forms. Previous work in this domain used expectation-maximization to predict simple phonological changes between ancient word forms and their cognates in modern languages. We extend this work with neural models that can capture more complicated phonological and morphological changes. At the same time, we preserve the inductive biases from classical methods by building monotonic alignment constraints into the model and deliberately underfitting during the maximization step. We evaluate our performance on the task of reconstructing Latin from a dataset of cognates across five Romance languages, achieving a notable reduction in edit distance from the target word forms compared to previous methods.
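
The evaluation metric mentioned above, edit distance between reconstructions and the target word forms, is the standard dynamic program sketched below; its DP table also encodes a monotonic character-level alignment of the kind the model constrains itself to. The example strings are hypothetical, not drawn from the paper's dataset.

```python
# Illustrative sketch: Levenshtein edit distance between a predicted protoform
# and the attested target. The example words are hypothetical.
def edit_distance(pred, target):
    """Dynamic-programming edit distance over character sequences; the DP table
    implicitly defines a monotonic alignment between the two strings."""
    m, n = len(pred), len(target)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                       # deletions
    for j in range(n + 1):
        dp[0][j] = j                       # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = dp[i - 1][j - 1] + (pred[i - 1] != target[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[m][n]

if __name__ == "__main__":
    # Toy reconstruction vs. target Latin form (hypothetical example).
    print(edit_distance("kaballu", "caballus"))  # -> 2
```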

Understanding Game-Playing Agents with Natural Language Annotations

Apr 15, 2022
Nicholas Tomlin, Andre He, Dan Klein

We present a new dataset containing 10K human-annotated games of Go and show how these natural language annotations can be used as a tool for model interpretability. Given a board state and its associated comment, our approach uses linear probing to predict mentions of domain-specific terms (e.g., ko, atari) from the intermediate state representations of game-playing agents like AlphaGo Zero. We find these game concepts are nontrivially encoded in two distinct policy networks, one trained via imitation learning and another trained via reinforcement learning. Furthermore, mentions of domain-specific terms are most easily predicted from the later layers of both models, suggesting that these policy networks encode high-level abstractions similar to those used in the natural language annotations.
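
A minimal sketch of the linear probing setup described above: fit a logistic-regression probe on frozen intermediate activations to predict whether the accompanying comment mentions a given term. The activations and labels below are random stand-ins, not actual AlphaGo Zero representations or annotations.

```python
# Illustrative sketch of linear probing: a linear classifier trained on frozen
# intermediate activations to predict mentions of a domain term (e.g., "atari").
# All data here is random stand-in data, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N, D = 1000, 256                       # 1000 board states, 256-dim activations
activations = rng.normal(size=(N, D))  # stand-in for a policy network's hidden layer
labels = rng.integers(0, 2, size=N)    # 1 if the comment mentions the term

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Probe accuracy on held-out states; chance is ~0.5 for this random data.
print("probe accuracy:", probe.score(X_test, y_test))
```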
