
Kevin Black


Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models

Oct 16, 2023
Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Walke, Chelsea Finn, Aviral Kumar, Sergey Levine

If generalist robots are to operate in truly unstructured environments, they need to be able to recognize and reason about novel objects and scenarios that may not be present in the robot's own training data. We propose SuSIE, a method that leverages an image-editing diffusion model to act as a high-level planner by proposing intermediate subgoals that a low-level controller can accomplish. Specifically, we finetune InstructPix2Pix on video data, consisting of both human videos and robot rollouts, such that it outputs hypothetical future "subgoal" observations given the robot's current observation and a language command. We also use the robot data to train a low-level goal-conditioned policy to act as the aforementioned low-level controller. We find that the high-level subgoal predictions can utilize Internet-scale pretraining and visual understanding to guide the low-level goal-conditioned policy, achieving significantly better generalization and precision than conventional language-conditioned policies. We achieve state-of-the-art results on the CALVIN benchmark, and also demonstrate robust generalization on real-world manipulation tasks, beating strong baselines that have access to privileged information or that utilize orders of magnitude more compute and training data. The project website can be found at http://rail-berkeley.github.io/susie.
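To make the hierarchy concrete, here is a minimal sketch of the control loop the abstract describes: an image-editing diffusion model proposes a subgoal image from the current observation and the language command, and a goal-conditioned policy is run toward that subgoal for a fixed number of steps. `SubgoalDiffusion`, `GoalConditionedPolicy`, and the `env` interface are placeholder names for illustration, not the released implementation.

```python
# Hypothetical sketch of the SuSIE-style high-level/low-level loop.
class SuSIEController:
    def __init__(self, subgoal_model, goal_policy, steps_per_subgoal=20):
        self.subgoal_model = subgoal_model      # finetuned InstructPix2Pix-style model
        self.goal_policy = goal_policy          # low-level goal-conditioned policy
        self.steps_per_subgoal = steps_per_subgoal

    def run_episode(self, env, instruction, max_subgoals=10):
        obs = env.reset()
        for _ in range(max_subgoals):
            # High level: "edit" the current image into a hypothetical future observation.
            subgoal_image = self.subgoal_model.sample(image=obs, prompt=instruction)
            # Low level: chase that subgoal for a fixed number of steps.
            for _ in range(self.steps_per_subgoal):
                action = self.goal_policy.act(observation=obs, goal_image=subgoal_image)
                obs, done = env.step(action)
                if done:
                    return obs
        return obs
```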

* 22 pages, 8 figures 

BridgeData V2: A Dataset for Robot Learning at Scale

Aug 24, 2023
Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers, Kuan Fang, Chelsea Finn, Sergey Levine


We introduce BridgeData V2, a large and diverse dataset of robotic manipulation behaviors designed to facilitate research on scalable robot learning. BridgeData V2 contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot. BridgeData V2 provides extensive task and environment variability, leading to skills that can generalize across environments, domains, and institutions, making the dataset a useful resource for a broad range of researchers. Additionally, the dataset is compatible with a wide variety of open-vocabulary, multi-task learning methods conditioned on goal images or natural language instructions. In our experiments, we train 6 state-of-the-art imitation learning and offline reinforcement learning methods on our dataset, and find that they succeed on a suite of tasks requiring varying amounts of generalization. We also demonstrate that the performance of these methods improves with more data and higher capacity models, and that training on a greater variety of skills leads to improved generalization. By publicly sharing BridgeData V2 and our pre-trained models, we aim to accelerate research in scalable robot learning methods. The project page is at https://rail-berkeley.github.io/bridgedata
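As a rough illustration of the goal-conditioned training the abstract mentions, the sketch below relabels each trajectory with a future observation as the goal (hindsight relabeling) to produce (observation, goal, action) pairs for behavioral cloning. The `load_trajectories` helper and the field names are assumptions, not the released data format.

```python
import numpy as np

def make_goal_conditioned_pairs(trajectories):
    """Relabel each trajectory with a future observation as the goal."""
    pairs = []
    for traj in trajectories:
        obs, actions = traj["observations"], traj["actions"]
        for t in range(len(actions)):
            goal_idx = np.random.randint(t, len(obs))  # hindsight goal from the same trajectory
            pairs.append((obs[t], obs[goal_idx], actions[t]))
    return pairs

# pairs = make_goal_conditioned_pairs(load_trajectories("bridgedata_v2/"))  # hypothetical loader
# A policy network would then be trained to predict `action` from (observation, goal).
```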

* 9 pages 

ViNT: A Foundation Model for Visual Navigation

Jun 26, 2023
Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine


General-purpose pre-trained models ("foundation models") have enabled practitioners to produce generalizable solutions for individual machine learning problems with datasets that are significantly smaller than those required for learning from scratch. Such models are typically trained on large and diverse datasets with weak supervision, consuming much more training data than is available for any individual downstream application. In this paper, we describe the Visual Navigation Transformer (ViNT), a foundation model that aims to bring the success of general-purpose pre-trained models to vision-based robotic navigation. ViNT is trained with a general goal-reaching objective that can be used with any navigation dataset, and employs a flexible Transformer-based architecture to learn navigational affordances and enable efficient adaptation to a variety of downstream navigational tasks. ViNT is trained on a number of existing navigation datasets, comprising hundreds of hours of robotic navigation from a variety of different robotic platforms, and exhibits positive transfer, outperforming specialist models trained on singular datasets. ViNT can be augmented with diffusion-based subgoal proposals to explore novel environments, and can solve kilometer-scale navigation problems when equipped with long-range heuristics. ViNT can also be adapted to novel task specifications with a technique inspired by prompt-tuning, where the goal encoder is replaced by an encoding of another task modality (e.g., GPS waypoints or routing commands) embedded into the same space of goal tokens. This flexibility and ability to accommodate a variety of downstream problem domains establishes ViNT as an effective foundation model for mobile robotics. For videos, code, and model checkpoints, see our project page at https://visualnav-transformer.github.io.
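The prompt-tuning-inspired adaptation at the end of the abstract can be pictured as swapping the goal-image encoder for a small learned encoder of another modality that emits tokens in the same goal-token space, while the pretrained backbone stays frozen. The module name, dimensions, and the choice of a 2-D GPS waypoint below are illustrative assumptions, not the ViNT codebase.

```python
import torch.nn as nn

class WaypointGoalEncoder(nn.Module):
    """Encode a 2-D GPS waypoint into the same token space as image-goal tokens."""
    def __init__(self, token_dim=512, num_goal_tokens=4):
        super().__init__()
        self.num_goal_tokens = num_goal_tokens
        self.token_dim = token_dim
        self.proj = nn.Sequential(
            nn.Linear(2, 256), nn.ReLU(),
            nn.Linear(256, token_dim * num_goal_tokens),
        )

    def forward(self, waypoint):  # waypoint: (batch, 2)
        tokens = self.proj(waypoint)
        return tokens.view(-1, self.num_goal_tokens, self.token_dim)

# During adaptation, only this encoder would be trained; the pretrained observation
# encoder and Transformer backbone would remain frozen.
```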


Granger-Causal Hierarchical Skill Discovery

Jun 15, 2023
Caleb Chuck, Kevin Black, Aditya Arjun, Yuke Zhu, Scott Niekum


Reinforcement Learning (RL) has shown promising results learning policies for complex tasks, but can often suffer from low sample efficiency and limited transfer. We introduce the Hierarchy of Interaction Skills (HIntS) algorithm, which uses learned interaction detectors to discover and train a hierarchy of skills that manipulate factors in factored environments. Inspired by Granger causality, these unsupervised detectors capture key events between factors, allowing HIntS to sample-efficiently learn useful skills and transfer those skills to other related tasks -- tasks where many reinforcement learning techniques struggle. We evaluate HIntS on a robotic pushing task with obstacles -- a challenging domain where other RL and HRL methods fall short. The learned skills not only demonstrate transfer using variants of Breakout, a common RL benchmark, but also show 2-3x improvement in both sample efficiency and final performance over comparable RL baselines. Together, HIntS demonstrates a proof of concept for using Granger-causal relationships for skill discovery.
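The Granger-style test the abstract alludes to can be caricatured as follows: factor B is said to interact with factor A at a timestep if including B's history makes A's next state substantially more predictable. The models, array shapes, and threshold below are placeholders, not the HIntS implementation.

```python
import numpy as np

def interaction_events(pred_with_b, pred_without_b, a_next, threshold=0.5):
    """Flag timesteps where conditioning on factor B sharply reduces prediction error."""
    err_with = np.linalg.norm(pred_with_b - a_next, axis=-1)
    err_without = np.linalg.norm(pred_without_b - a_next, axis=-1)
    return (err_without - err_with) > threshold  # boolean mask of detected interaction events

# Detected events could then serve as termination and reward signals for training a skill
# that deliberately brings the interaction about (e.g., the paddle striking the ball in Breakout).
```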

* Under Submission 

Training Diffusion Models with Reinforcement Learning

May 23, 2023
Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, Sergey Levine


Diffusion models are a class of flexible generative models trained with an approximation to the log-likelihood objective. However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for such objectives. We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms, which we refer to as denoising diffusion policy optimization (DDPO), that are more effective than alternative reward-weighted likelihood approaches. Empirically, DDPO is able to adapt text-to-image diffusion models to objectives that are difficult to express via prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Finally, we show that DDPO can improve prompt-image alignment using feedback from a vision-language model without the need for additional data collection or human annotation.
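The core of the policy-gradient formulation described above can be sketched as: treat each denoising step as an action in a multi-step MDP, and reinforce the log-probability of the transitions that were actually sampled, weighted by the reward assigned to the final image. The `denoiser_log_prob` callable and the trajectory format are assumptions for illustration, not the DDPO codebase.

```python
import torch

def ddpo_policy_gradient_loss(denoiser_log_prob, trajectory, prompt, reward):
    """trajectory: list of (x_t, t, x_{t-1}) tuples recorded while sampling an image."""
    log_probs = []
    for x_t, t, x_prev in trajectory:
        # Log-likelihood of the denoising transition that was actually taken.
        log_probs.append(denoiser_log_prob(x_prev, x_t, t, prompt))
    # REINFORCE-style objective: maximize the reward-weighted log-likelihood of the trajectory.
    return -(reward * torch.stack(log_probs).sum())
```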

* 20 pages, 12 figures 

Real-Time, Flight-Ready, Non-Cooperative Spacecraft Pose Estimation Using Monocular Imagery

Jan 23, 2021
Kevin Black, Shrivu Shankar, Daniel Fonseka, Jacob Deutsch, Abhimanyu Dhir, Maruthi R. Akella


A key requirement for autonomous on-orbit proximity operations is the estimation of a target spacecraft's relative pose (position and orientation). It is desirable to employ monocular cameras for this problem due to their low cost, weight, and power requirements. This work presents a novel convolutional neural network (CNN)-based monocular pose estimation system that achieves state-of-the-art accuracy with low computational demand. In combination with a Blender-based synthetic data generation scheme, the system demonstrates the ability to generalize from purely synthetic training data to real in-space imagery of the Northrop Grumman Enhanced Cygnus spacecraft. Additionally, the system achieves real-time performance on low-power flight-like hardware.
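For a sense of the problem setup, here is a generic monocular pose-regression head: a CNN backbone followed by small heads for relative position and a unit-quaternion orientation. This is an illustrative sketch under assumed dimensions, not the architecture described in the paper.

```python
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                        # any image feature extractor
        self.position_head = nn.Linear(feat_dim, 3)     # relative position (x, y, z)
        self.orientation_head = nn.Linear(feat_dim, 4)  # quaternion (w, x, y, z)

    def forward(self, image):
        feats = self.backbone(image)
        position = self.position_head(feats)
        quat = self.orientation_head(feats)
        quat = quat / quat.norm(dim=-1, keepdim=True)   # normalize to a valid rotation
        return position, quat
```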

* Presented at the 31st AAS/AIAA Space Flight Mechanics Meeting, February 2021. 16 pages, 7 figures 

A Pipeline for Vision-Based On-Orbit Proximity Operations Using Deep Learning and Synthetic Imagery

Jan 14, 2021
Carson Schubert, Kevin Black, Daniel Fonseka, Abhimanyu Dhir, Jacob Deutsch, Nihal Dhamani, Gavin Martin, Maruthi Akella


Deep learning has become the gold standard for image processing over the past decade. Simultaneously, we have seen growing interest in orbital activities such as satellite servicing and debris removal that depend on proximity operations between spacecraft. However, two key challenges currently pose a major barrier to the use of deep learning for vision-based on-orbit proximity operations. Firstly, efficient implementation of these techniques relies on an effective system for model development that streamlines data curation, training, and evaluation. Secondly, a scarcity of labeled training data (images of a target spacecraft) hinders creation of robust deep learning models. This paper presents an open-source deep learning pipeline, developed specifically for on-orbit visual navigation applications, that addresses these challenges. The core of our work consists of two custom software tools built on top of a cloud architecture that interconnects all stages of the model development process. The first tool leverages Blender, an open-source 3D graphics toolset, to generate labeled synthetic training data with configurable model poses (positions and orientations), lighting conditions, backgrounds, and commonly observed in-space image aberrations. The second tool is a plugin-based framework for effective dataset curation and model training; it provides common functionality like metadata generation and remote storage access to all projects while giving complete independence to project-specific code. Time-consuming, graphics-intensive processes such as synthetic image generation and model training run on cloud-based computational resources which scale to any scope and budget and allow development of even the largest datasets and models from any machine. The presented system has been used in the Texas Spacecraft Laboratory with marked benefits in development speed and quality.
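The synthetic-data generation loop described above can be pictured as randomizing the target's pose in Blender, rendering a frame, and writing the ground-truth pose as a label, all through Blender's Python API (bpy). The object name "Target", the output paths, and the label format below are assumptions for illustration, not the released tool.

```python
import json
import random
import bpy  # Blender's Python API; this runs inside Blender

target = bpy.data.objects["Target"]
scene = bpy.context.scene

for i in range(1000):
    # Randomize the relative pose (positions in meters, rotations in radians).
    target.location = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(5, 30))
    target.rotation_euler = tuple(random.uniform(0, 6.283) for _ in range(3))

    # Render the frame to disk.
    scene.render.filepath = f"/tmp/synthetic/img_{i:05d}.png"
    bpy.ops.render.render(write_still=True)

    # Save the ground-truth pose label alongside the image.
    label = {"location": list(target.location), "rotation_euler": list(target.rotation_euler)}
    with open(f"/tmp/synthetic/img_{i:05d}.json", "w") as f:
        json.dump(label, f)
```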

* Accepted to IEEE Aerospace Conference 2021. 14 pages, 11 figures 