Publications by Norman Di Palo

R+X: Retrieval and Execution from Everyday Human Videos

Jul 17, 2024

Keypoint Action Tokens Enable In-Context Imitation Learning in Robotics

Mar 28, 2024

DINOBot: Robot Manipulation via Retrieval and Alignment with Vision Foundation Models

Feb 20, 2024

On the Effectiveness of Retrieval, Alignment, and Replay in Manipulation

Dec 19, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

Language Models as Zero-Shot Trajectory Generators

Oct 17, 2023

Towards A Unified Agent with Foundation Models

Jul 18, 2023

Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning

Apr 06, 2022

Learning Multi-Stage Tasks with One Demonstration via Self-Replay

Nov 14, 2021

Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across the Workspace

May 24, 2021