
Li Fei-Fei


M-EMBER: Tackling Long-Horizon Mobile Manipulation via Factorized Domain Transfer

May 23, 2023
Bohan Wu, Roberto Martín-Martín, Li Fei-Fei

MimicPlay: Long-Horizon Imitation Learning by Watching Human Play

Feb 24, 2023
Chen Wang, Linxi Fan, Jiankai Sun, Ruohan Zhang, Li Fei-Fei, Danfei Xu, Yuke Zhu, Anima Anandkumar

See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation

Dec 08, 2022
Hao Li, Yizhi Zhang, Junzhe Zhu, Shaoxiong Wang, Michelle A. Lee, Huazhe Xu, Edward Adelson, Li Fei-Fei, Ruohan Gao, Jiajun Wu

Active Task Randomization: Learning Visuomotor Skills for Sequential Manipulation by Proposing Feasible and Novel Tasks

Nov 11, 2022
Kuan Fang, Toki Migimatsu, Ajay Mandlekar, Li Fei-Fei, Jeannette Bohg

Retrospectives on the Embodied AI Workshop

Oct 17, 2022
Matt Deitke, Dhruv Batra, Yonatan Bisk, Tommaso Campari, Angel X. Chang, Devendra Singh Chaplot, Changan Chen, Claudia Pérez D'Arpino, Kiana Ehsani, Ali Farhadi, Li Fei-Fei, Anthony Francis, Chuang Gan, Kristen Grauman, David Hall, Winson Han, Unnat Jain, Aniruddha Kembhavi, Jacob Krantz, Stefan Lee, Chengshu Li, Sagnik Majumder, Oleksandr Maksymets, Roberto Martín-Martín, Roozbeh Mottaghi, Sonia Raychaudhuri, Mike Roberts, Silvio Savarese, Manolis Savva, Mohit Shridhar, Niko Sünderhauf, Andrew Szot, Ben Talbot, Joshua B. Tenenbaum, Jesse Thomason, Alexander Toshev, Joanne Truong, Luca Weihs, Jiajun Wu

ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward

Oct 09, 2022
Zixian Ma, Rose Wang, Li Fei-Fei, Michael Bernstein, Ranjay Krishna

VIMA: General Robot Manipulation with Multimodal Prompts

Oct 06, 2022
Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan

GaitForeMer: Self-Supervised Pre-Training of Transformers via Human Motion Forecasting for Few-Shot Gait Impairment Severity Estimation

Jun 30, 2022
Mark Endo, Kathleen L. Poston, Edith V. Sullivan, Li Fei-Fei, Kilian M. Pohl, Ehsan Adeli

MaskViT: Masked Visual Pre-Training for Video Prediction

Jun 23, 2022
Agrim Gupta, Stephen Tian, Yunzhi Zhang, Jiajun Wu, Roberto Martín-Martín, Li Fei-Fei

BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description for Benchmarking Embodied AI Agents

Jun 13, 2022
Ziang Liu, Roberto Martín-Martín, Fei Xia, Jiajun Wu, Li Fei-Fei
