Yuchen Cui

Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections

Nov 17, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

Gesture-Informed Robot Assistance via Foundation Models

Sep 07, 2023

HYDRA: Hybrid Robot Actions for Imitation Learning

Jun 29, 2023

Data Quality in Imitation Learning

Jun 04, 2023

"No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy

Jan 06, 2023

Masked Imitation Learning: Discovering Environment-Invariant Modalities in Multimodal Demonstrations

Sep 16, 2022

Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?

Apr 23, 2022

The EMPATHIC Framework for Task Learning from Implicit Human Feedback

Sep 28, 2020

Uncertainty-Aware Data Aggregation for Deep Imitation Learning

May 07, 2019