
Xiaotong Chen

VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation

Jun 17, 2022

ClearPose: Large-scale Transparent Object Dataset and Benchmark

Mar 08, 2022

ProgressLabeller: Visual Data Stream Annotation for Training Object-Centric 3D Perception

Mar 01, 2022

PatchTrack: Multiple Object Tracking Using Frame Patches

Jan 01, 2022

Manipulation-Oriented Object Perception in Clutter through Affordance Coordinate Frames

Oct 16, 2020

Design, Control, and Applications of a Soft Robotic Arm

Jul 08, 2020

LIT: Light-field Inference of Transparency for Refractive Object Localization

Oct 24, 2019

GRIP: Generative Robust Inference and Perception for Semantic Robot Manipulation in Adversarial Environments

Mar 20, 2019