Karl Pertsch

Yell At Your Robot: Improving On-the-Fly from Language Corrections

Mar 19, 2024

DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

Mar 19, 2024

LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers

Dec 14, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance

Oct 17, 2023

RoboCLIP: One Demonstration is Enough to Learn Robot Policies

Oct 11, 2023

Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions

Sep 18, 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

Jul 28, 2023

SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling

Jun 20, 2023

Cross-Domain Transfer via Semantic Skill Imitation

Dec 14, 2022