Shuran Song

Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition

Jul 26, 2023
Huy Ha, Pete Florence, Shuran Song

We present a framework for robot skill acquisition that (1) efficiently scales up the generation of language-labelled robot data and (2) effectively distills this data down into a robust multi-task, language-conditioned visuomotor policy. For (1), we use a large language model (LLM) to guide high-level planning and sampling-based robot planners (e.g., motion or grasp samplers) to generate diverse and rich manipulation trajectories. To robustify this data-collection process, the LLM also infers a code snippet encoding each task's success condition, which lets the process detect failures and retry and automatically labels trajectories as successes or failures. For (2), we extend the single-task diffusion-policy behavior-cloning approach to multi-task settings with language conditioning. Finally, we propose a new multi-task benchmark with 18 tasks across five domains to test long-horizon behavior, common-sense reasoning, tool use, and intuitive physics. We find that our distilled policy successfully learns the robust retrying behavior of its data-collection policy while improving absolute success rates by 34.8% on average across the five domains. The benchmark, code, and qualitative results are available on our website: https://www.cs.columbia.edu/~huy/scalingup/

* 26 pages, 9 figures, videos and code links on website https://www.cs.columbia.edu/~huy/scalingup/ 
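
To make the data-collection loop concrete, here is a minimal Python sketch of the retry-and-label behavior described above. The helpers llm_infer_success_check, sample_plan, and execute_on_robot are hypothetical stand-ins for the LLM planner, the sampling-based planners, and the robot interface, not the authors' actual API.

```python
# Minimal sketch of the collect-with-retry loop described in the abstract.
# All helpers passed in are hypothetical stand-ins, not the authors' code.
from dataclasses import dataclass, field


@dataclass
class Episode:
    task: str
    trajectory: list = field(default_factory=list)
    success: bool = False


def collect_episode(task, env, llm_infer_success_check, sample_plan,
                    execute_on_robot, max_retries=3):
    """Collect one language-labelled episode, retrying on failure."""
    # The LLM turns the task description into an executable success predicate,
    # e.g. "close the drawer" -> lambda state: state["drawer_open"] < 0.01
    success_check = llm_infer_success_check(task)
    episode = Episode(task=task)
    for attempt in range(max_retries):
        plan = sample_plan(task, env)              # sampling-based motion/grasp planner
        episode.trajectory += execute_on_robot(plan, env)
        if success_check(env.get_state()):
            episode.success = True                 # automatic success label
            break                                  # stop retrying once the task succeeds
    return episode                                 # failed attempts are kept, labelled success=False
```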

RICo: Rotate-Inpaint-Complete for Generalizable Scene Reconstruction

Jul 21, 2023
Isaac Kasahara, Shubham Agrawal, Selim Engin, Nikhil Chavan-Dafle, Shuran Song, Volkan Isler

General scene reconstruction refers to the task of estimating the full 3D geometry and texture of a scene containing previously unseen objects. In many practical applications such as AR/VR, autonomous navigation, and robotics, only a single view of the scene may be available, making scene reconstruction very challenging. In this paper, we present a method for scene reconstruction that structurally breaks the problem into two steps: rendering novel views via inpainting, and 2D-to-3D scene lifting. Specifically, we leverage the generalization capability of large vision-language models to inpaint the missing areas of scene color images rendered from different views. Next, we lift these inpainted images to 3D by predicting normals for the inpainted image and solving for the missing depth values. By predicting normals instead of depth directly, our method is robust to changes in depth distribution and scale. With rigorous quantitative evaluation, we show that our method outperforms multiple baselines while generalizing to novel objects and scenes.
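
The two-step pipeline can be pictured roughly as below. render_view, inpaint_missing_regions, predict_surface_normals, and integrate_depth_from_normals are hypothetical placeholders for the rendering and learned components, so this is a structural sketch rather than the paper's implementation.

```python
# Sketch of the two-step pipeline: novel-view inpainting, then 2D-to-3D lifting.
# All callables passed in are hypothetical placeholders.
def reconstruct_scene(rgb, depth, camera_poses, render_view,
                      inpaint_missing_regions, predict_surface_normals,
                      integrate_depth_from_normals):
    """Return a list of completed (rgb, depth) views of the scene."""
    completed_views = []
    for pose in camera_poses:
        # Step 1: render the partially observed scene from a new viewpoint;
        # unseen regions come back as holes that the inpainting model fills.
        partial_rgb, partial_depth, hole_mask = render_view(rgb, depth, pose)
        full_rgb = inpaint_missing_regions(partial_rgb, hole_mask)

        # Step 2: lift to 3D. Predicting normals (rather than metric depth)
        # keeps the model robust to depth scale; depth is recovered by
        # integrating the normals, anchored on the known depth values.
        normals = predict_surface_normals(full_rgb)
        full_depth = integrate_depth_from_normals(normals, partial_depth, hole_mask)
        completed_views.append((full_rgb, full_depth))
    return completed_views
```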

XSkill: Cross Embodiment Skill Discovery

Jul 19, 2023
Mengda Xu, Zhenjia Xu, Cheng Chi, Manuela Veloso, Shuran Song

Human demonstration videos are a widely available data source for robot learning and an intuitive user interface for expressing desired behavior. However, directly extracting reusable robot manipulation skills from unstructured human videos is challenging due to the large embodiment difference and unobserved action parameters. To bridge this embodiment gap, this paper introduces XSkill, an imitation learning framework that (1) discovers a cross-embodiment representation called skill prototypes purely from unlabeled human and robot manipulation videos, (2) transfers the skill representation to robot actions using a conditional diffusion policy, and (3) composes the learned skills to accomplish unseen tasks specified by a human prompt video. Our experiments in simulation and real-world environments show that the discovered skill prototypes facilitate both skill transfer and composition for unseen tasks, resulting in a more general and scalable imitation learning framework. XSkill's performance is best appreciated through the videos on the project website: https://xskillcorl.github.io.
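
For intuition, a rough sketch of how composition from a prompt video could look: each clip is embedded, matched to its nearest discovered skill prototype, and the prototype conditions the policy. encode_clip, prototypes, policy, and env are hypothetical stand-ins, not the released XSkill code.

```python
# Sketch of prototype-based skill composition from a human prompt video.
import numpy as np


def compose_skills(prompt_video_clips, encode_clip, prototypes, policy, env):
    """prototypes: (K, d) array of discovered skill prototypes."""
    for clip in prompt_video_clips:
        z = encode_clip(clip)                 # embodiment-agnostic clip embedding, shape (d,)
        sims = prototypes @ z / (
            np.linalg.norm(prototypes, axis=1) * np.linalg.norm(z) + 1e-8)
        skill = prototypes[np.argmax(sims)]   # nearest skill prototype (cosine similarity)
        done = False
        while not done:
            obs = env.get_observation()
            action = policy(obs, skill)       # skill-conditioned (diffusion) policy
            done = env.step(action)           # assume env reports when the sub-task ends
```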

REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction

Jul 12, 2023
Zeyi Liu, Arpit Bahety, Shuran Song

The ability to detect and analyze failed executions automatically is crucial for an explainable and robust robotic system. Recently, Large Language Models (LLMs) have demonstrated strong reasoning abilities on textual inputs. To leverage the power of LLMs for robot failure explanation, we introduce REFLECT, a framework that queries an LLM to identify and explain robot failures given a hierarchical summary of the robot's past experiences generated from multi-sensory data. Conditioned on the explanation, a task planner generates an executable plan for the robot to correct the failure and complete the task. To systematically evaluate the framework, we create the RoboFail dataset with a variety of tasks and failure scenarios. We demonstrate that the LLM-based framework is able to generate informative failure explanations that assist successful correction planning. Videos and code are available at: https://roboreflect.github.io/.
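
A minimal sketch of the explain-then-correct flow, assuming a hypothetical query_llm backend and plan_correction planner; the event summary lines stand in for the hierarchical summary built from multi-sensory data.

```python
# Sketch of REFLECT-style failure explanation and correction planning.
# query_llm and plan_correction are hypothetical stand-ins.
def explain_and_correct(task, event_summary_lines, query_llm, plan_correction):
    """Ask an LLM why the task failed, then plan a corrective action sequence."""
    # Hierarchical summary: key events distilled from multi-sensory observations,
    # e.g. ["0:05 gripper closed on mug", "0:09 mug slipped out of gripper", ...]
    summary = "\n".join(event_summary_lines)
    prompt = (
        f"Task: {task}\n"
        f"Robot experience summary:\n{summary}\n"
        "Did the robot fail? If so, explain the most likely cause of the failure."
    )
    explanation = query_llm(prompt)
    # The explanation conditions the task planner, which outputs an executable
    # plan to recover from the failure and finish the task.
    return explanation, plan_correction(task, explanation)
```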

RoCo: Dialectic Multi-Robot Collaboration with Large Language Models

Jul 10, 2023
Zhao Mandi, Shreeya Jain, Shuran Song

We propose a novel approach to multi-robot collaboration that harnesses the power of pre-trained large language models (LLMs) for both high-level communication and low-level path planning. Robots are equipped with LLMs to discuss and collectively reason about task strategies. They then generate sub-task plans and task-space waypoint paths, which are used by a multi-arm motion planner to accelerate trajectory planning. We also provide feedback from the environment, such as collision checking, and prompt the LLM agents to improve their plans and waypoints in-context. For evaluation, we introduce RoCoBench, a 6-task benchmark covering a wide range of multi-robot collaboration scenarios, accompanied by a text-only dataset for agent representation and reasoning. We experimentally demonstrate the effectiveness of our approach: it achieves high success rates across all tasks in RoCoBench and adapts to variations in task semantics. Our dialog setup offers high interpretability and flexibility; in real-world experiments, we show that RoCo easily incorporates a human in the loop, where a user can communicate and collaborate with a robot agent to complete tasks together. See the project website https://project-roco.github.io for videos and code.
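
The dialog-plan-feedback loop might be skeletonized as below; agent_respond, parse_waypoints, and check_collisions are hypothetical stand-ins for the LLM agents, the dialog parser, and the environment feedback (e.g. collision checking), not the released RoCo code.

```python
# Sketch of LLM-mediated multi-robot planning with environment feedback.
def collaborate(task, agent_names, agent_respond, parse_waypoints,
                check_collisions, max_rounds=5):
    """LLM agents discuss a plan, then refine it in-context using feedback."""
    feedback = ""
    for _ in range(max_rounds):
        # High-level communication: each robot's LLM agent speaks in turn,
        # conditioned on the task, the dialog so far, and any feedback.
        dialog = []
        for name in agent_names:
            dialog.append(f"{name}: " + agent_respond(name, task, dialog, feedback))
        # Low-level planning: parse per-arm waypoint paths from the dialog and
        # validate them before handing off to the multi-arm motion planner.
        waypoints = parse_waypoints(dialog)
        feedback = check_collisions(waypoints)
        if not feedback:        # empty feedback string means the plan is valid
            return waypoints
    return None                 # no valid plan found within the round budget
```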

General Part Assembly Planning

Jul 01, 2023
Yulong Li, Andy Zeng, Shuran Song

Most successes in autonomous robotic assembly have been restricted to a single target or category. We propose to investigate general part assembly, the task of creating novel target assemblies with unseen part shapes. To tackle the planning of general part assembly, we present the General Part Assembly Transformer (GPAT), a transformer-based model architecture that accurately predicts part poses by inferring how each part shape corresponds to the target shape. Our experiments on both 3D CAD models and real-world scans demonstrate GPAT's generalization to novel and diverse target and part shapes. Project website: https://general-part-assembly.github.io/

* Project website: https://general-part-assembly.github.io/ 
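
For intuition, here is a minimal PyTorch sketch of a GPAT-style predictor in which part tokens cross-attend to target-shape tokens and regress per-part poses. The dimensions, the mean-pooled point encoder, and the 7-D translation-plus-quaternion output are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative transformer that predicts a pose for each part given a target shape.
import torch
import torch.nn as nn


class PartAssemblyTransformer(nn.Module):
    def __init__(self, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        # Shared per-point encoder for both the parts and the target point cloud.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.pose_head = nn.Linear(d_model, 7)   # 3-D translation + quaternion

    def forward(self, part_points, target_points):
        # part_points: (B, num_parts, P, 3); target_points: (B, T, 3)
        part_tokens = self.point_mlp(part_points).mean(dim=2)   # one token per part
        target_tokens = self.point_mlp(target_points)           # target-shape context
        # Each part token cross-attends to the target shape to infer where it belongs.
        feats = self.decoder(tgt=part_tokens, memory=target_tokens)
        out = self.pose_head(feats)                              # (B, num_parts, 7)
        quat = nn.functional.normalize(out[..., 3:], dim=-1)     # unit quaternion
        return torch.cat([out[..., :3], quat], dim=-1)


# Example: 2 parts of 128 points each, a target scan of 1024 points.
poses = PartAssemblyTransformer()(torch.randn(1, 2, 128, 3), torch.randn(1, 1024, 3))
```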

TidyBot: Personalized Robot Assistance with Large Language Models

May 09, 2023
Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser

For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking up objects and putting them away. A key challenge is determining the proper place to put each object, as people's preferences can vary greatly depending on personal taste or cultural background. For instance, one person may prefer storing shirts in the drawer, while another may prefer them on the shelf. We aim to build systems that can learn such preferences from just a handful of examples via prior interactions with a particular person. We show that robots can combine language-based planning and perception with the few-shot summarization capabilities of large language models (LLMs) to infer generalized user preferences that are broadly applicable to future interactions. This approach enables fast adaptation and achieves 91.2% accuracy on unseen objects in our benchmark dataset. We also demonstrate our approach on a real-world mobile manipulator called TidyBot, which successfully puts away 85.0% of objects in real-world test scenarios.

* Project page: https://tidybot.cs.princeton.edu 
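
A minimal sketch of the preference-summarization step, assuming a hypothetical query_llm backend and illustrative prompts; the full system combines this with language-based planning and perception on the robot.

```python
# Sketch of LLM few-shot preference summarization and receptacle selection.
# query_llm is a hypothetical stand-in for the LLM backend.
def summarize_preferences(examples, query_llm):
    """Turn few-shot examples like ('dark shirt', 'drawer') into general rules."""
    shots = "\n".join(f"{obj} -> {place}" for obj, place in examples)
    prompt = (
        "A person placed objects as follows:\n"
        f"{shots}\n"
        "Summarize this person's general tidying preferences as short rules."
    )
    return query_llm(prompt)


def choose_receptacle(obj, rules, receptacles, query_llm):
    """Apply the summarized rules to an unseen object."""
    prompt = (
        f"Rules: {rules}\n"
        f"Receptacles: {', '.join(receptacles)}\n"
        f"Where should '{obj}' go? Answer with one receptacle name."
    )
    return query_llm(prompt).strip()
```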

DataComp: In search of the next generation of multimodal datasets

May 03, 2023
Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt

Large multimodal datasets have been instrumental in recent breakthroughs such as CLIP, Stable Diffusion, and GPT-4. At the same time, datasets rarely receive the same research attention as model architectures or training algorithms. To address this shortcoming in the machine learning ecosystem, we introduce DataComp, a benchmark where the training code is fixed and researchers innovate by proposing new training sets. We provide a testbed for dataset experiments centered around a new candidate pool of 12.8B image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing on 38 downstream test sets. Our benchmark consists of multiple scales, with four candidate pool sizes and associated compute budgets ranging from 12.8M to 12.8B samples seen during training. This multi-scale design facilitates the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow is a promising way of improving multimodal datasets. We introduce DataComp-1B, a dataset created by applying a simple filtering algorithm to the 12.8B candidate pool. The resulting 1.4B subset enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet. Our new ViT-L/14 model outperforms a larger ViT-g/14 trained on LAION-2B by 0.7 percentage points while requiring 9x less training compute. We also outperform OpenAI's CLIP ViT-L/14 by 3.7 percentage points, which is trained with the same compute budget as our model. These gains highlight the potential for improving model performance by carefully curating training sets. We view DataComp-1B as only the first step and hope that DataComp paves the way toward the next generation of multimodal datasets.
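
As a sketch of the participant workflow (the training code is fixed, only the data changes), here is a generic CLIP-score filter over the candidate pool. compute_clip_similarity and the threshold are placeholders; the actual DataComp-1B filtering recipe is more involved than this.

```python
# Illustrative participant-side filter: keep image-text pairs whose CLIP
# similarity clears a threshold, then hand the subset to the fixed training code.
def filter_pool(candidate_pairs, compute_clip_similarity, threshold=0.3):
    """candidate_pairs: iterable of (image, caption); returns the retained subset."""
    kept = []
    for image, caption in candidate_pairs:
        score = compute_clip_similarity(image, caption)   # cosine similarity
        if score >= threshold:
            kept.append((image, caption))
    return kept
```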
