Mohit Shridhar

GenSim: Generating Robotic Simulation Tasks via Large Language Models

Oct 02, 2023

AR2-D2: Training a Robot Without a Robot

Jun 23, 2023

Retrospectives on the Embodied AI Workshop

Oct 17, 2022

Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation

Sep 12, 2022

CLIPort: What and Where Pathways for Robotic Manipulation

Sep 24, 2021

Language Grounding with 3D Objects

Jul 26, 2021

ALFWorld: Aligning Text and Embodied Environments for Interactive Learning

Oct 08, 2020

ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks

Dec 03, 2019

Interactive Visual Grounding of Referring Expressions for Human-Robot Interaction

Jun 11, 2018

Grounding Spatio-Semantic Referring Expressions for Human-Robot Interaction

Jul 18, 2017