
Shubham Tulsiani

Track2Act: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation

May 02, 2024

G-HOP: Generative Hand-Object Prior for Interaction Reconstruction and Grasp Synthesis

Apr 18, 2024

MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation

Apr 04, 2024

Cameras as Rays: Pose Estimation via Ray Diffusion

Feb 22, 2024

UpFusion: Novel View Diffusion from Unposed Sparse View Observations

Jan 04, 2024

Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans

Dec 01, 2023

Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips

Sep 11, 2023

RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking

Sep 05, 2023

Visual Affordance Prediction for Guiding Robot Exploration

May 28, 2023

RelPose++: Recovering 6D Poses from Sparse-view Observations

May 08, 2023