Andy Zeng

Hybrid Random Features

Oct 13, 2021

Multi-Task Learning with Sequence-Conditioned Transporter Networks

Sep 15, 2021

Implicit Behavioral Cloning

Sep 01, 2021

Learning to See before Learning to Act: Visual Pre-training for Manipulation

Jul 01, 2021

XIRL: Cross-embodiment Inverse Reinforcement Learning

Jun 07, 2021

Spatial Intention Maps for Multi-Agent Mobile Manipulation

Mar 23, 2021

Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks

Dec 18, 2020

Transporter Networks: Rearranging the Visual World for Robotic Manipulation

Oct 27, 2020

Spatial Action Maps for Mobile Manipulation

Apr 20, 2020

Grasping in the Wild: Learning 6DoF Closed-Loop Grasping from Low-Cost Demonstrations

Dec 09, 2019