Ryan Julian

AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents

Jan 23, 2024

Conditionally Combining Robot Skills using Large Language Models

Oct 25, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

Jul 28, 2023

Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators

May 05, 2023

RT-1: Robotics Transformer for Real-World Control at Scale

Dec 13, 2022

Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

Apr 04, 2022

A Simple Approach to Continual Learning by Transferring Skill Parameters

Oct 19, 2021

Towards Exploiting Geometry and Time for Fast Off-Distribution Adaptation in Multi-Task Robot Learning

Jun 29, 2021

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

Apr 28, 2021