Dorsa Sadigh

Google DeepMind

Efficient Data Collection for Robotic Manipulation via Compositional Generalization

Mar 08, 2024

RT-Sketch: Goal-Conditioned Imitation Learning from Hand-Drawn Sketches

Mar 05, 2024

RT-H: Action Hierarchies Using Language

Mar 04, 2024

Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation

Feb 29, 2024

Batch Active Learning of Reward Functions from Human Preferences

Feb 24, 2024

Learning to Learn Faster from Human Feedback with Language Model Predictive Control

Feb 18, 2024

Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models

Feb 12, 2024

Generative Expressive Robot Behaviors using Large Language Models

Jan 30, 2024

AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents

Jan 23, 2024

SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities

Jan 22, 2024