Yuke Zhu

Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment

Nov 15, 2022
Huihan Liu, Soroush Nasiriany, Lance Zhang, Zhiyao Bao, Yuke Zhu

Learning and Retrieval from Prior Data for Skill-based Imitation Learning

Oct 20, 2022
Soroush Nasiriany, Tian Gao, Ajay Mandlekar, Yuke Zhu

VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors

Oct 20, 2022
Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu

VIMA: General Robot Manipulation with Multimodal Prompts

Oct 06, 2022
Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan

Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments

Sep 19, 2022
Mingyo Seo, Ryan Gupta, Yifeng Zhu, Alexy Skoutnev, Luis Sentis, Yuke Zhu

Causal Dynamics Learning for Task-Independent State Abstraction

Jun 27, 2022
Zizhao Wang, Xuesu Xiao, Zifan Xu, Yuke Zhu, Peter Stone

MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge

Jun 17, 2022
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, Anima Anandkumar

Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions

May 27, 2022
Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, Anima Anandkumar

COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles

May 04, 2022
Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, Yuke Zhu

RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning

Apr 24, 2022
Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Song-Chun Zhu, Anima Anandkumar
