Mohit Shridhar

GenSim: Generating Robotic Simulation Tasks via Large Language Models

Oct 02, 2023
Lirui Wang, Yiyang Ling, Zhecheng Yuan, Mohit Shridhar, Chen Bao, Yuzhe Qin, Bailin Wang, Huazhe Xu, Xiaolong Wang

AR2-D2: Training a Robot Without a Robot

Jun 23, 2023
Jiafei Duan, Yi Ru Wang, Mohit Shridhar, Dieter Fox, Ranjay Krishna

Retrospectives on the Embodied AI Workshop

Oct 17, 2022
Matt Deitke, Dhruv Batra, Yonatan Bisk, Tommaso Campari, Angel X. Chang, Devendra Singh Chaplot, Changan Chen, Claudia Pérez D'Arpino, Kiana Ehsani, Ali Farhadi, Li Fei-Fei, Anthony Francis, Chuang Gan, Kristen Grauman, David Hall, Winson Han, Unnat Jain, Aniruddha Kembhavi, Jacob Krantz, Stefan Lee, Chengshu Li, Sagnik Majumder, Oleksandr Maksymets, Roberto Martín-Martín, Roozbeh Mottaghi, Sonia Raychaudhuri, Mike Roberts, Silvio Savarese, Manolis Savva, Mohit Shridhar, Niko Sünderhauf, Andrew Szot, Ben Talbot, Joshua B. Tenenbaum, Jesse Thomason, Alexander Toshev, Joanne Truong, Luca Weihs, Jiajun Wu

Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation

Sep 12, 2022
Mohit Shridhar, Lucas Manuelli, Dieter Fox

CLIPort: What and Where Pathways for Robotic Manipulation

Sep 24, 2021
Mohit Shridhar, Lucas Manuelli, Dieter Fox

Language Grounding with 3D Objects

Jul 26, 2021
Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer

ALFWorld: Aligning Text and Embodied Environments for Interactive Learning

Oct 08, 2020
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, Matthew Hausknecht

ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks

Dec 03, 2019
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox

Interactive Visual Grounding of Referring Expressions for Human-Robot Interaction

Jun 11, 2018
Mohit Shridhar, David Hsu

Grounding Spatio-Semantic Referring Expressions for Human-Robot Interaction

Jul 18, 2017
Mohit Shridhar, David Hsu
