
Karol Hausman

Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities

Dec 04, 2023

RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches

Nov 06, 2023

RoboVQA: Multimodal Long-Horizon Reasoning for Robotics

Nov 01, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions

Sep 18, 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

Jul 28, 2023

Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators

May 05, 2023

PaLM-E: An Embodied Multimodal Language Model

Mar 06, 2023

Open-World Object Manipulation using Pre-trained Vision-Language Models

Mar 02, 2023

Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control

Mar 01, 2023