Ajay Mandlekar

SkillMimicGen: Automated Demonstration Generation for Efficient Skill Learning and Deployment

Oct 24, 2024

SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation

Oct 23, 2024

Latent Action Pretraining from Videos

Oct 15, 2024

AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation

Oct 01, 2024

Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations

Aug 08, 2024

RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots

Jun 04, 2024

IntervenGen: Interventional Data Generation for Robust and Data-Efficient Robot Imitation Learning

May 02, 2024

Signatures Meet Dynamic Programming: Generalizing Bellman Equations for Trajectory Following

Dec 09, 2023

NOD-TAMP: Multi-Step Manipulation Planning with Neural Object Descriptors

Nov 02, 2023

MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations

Oct 26, 2023