
Rong Xiong

Learning A Simulation-based Visual Policy for Real-world Peg In Unseen Holes
May 09, 2022

Map-based Visual-Inertial Localization: Consistency and Complexity
Apr 26, 2022

Toward Consistent and Efficient Map-based Visual-inertial Localization: Theory Framework and Filter Design
Apr 26, 2022

Learning to Fill the Seam by Vision: Sub-millimeter Peg-in-hole on Unseen Shapes in Real World
Apr 20, 2022

One RING to Rule Them All: Radon Sinogram for Place Recognition, Orientation and Translation Estimation
Apr 17, 2022

A Visual Navigation Perspective for Category-Level Object Pose Estimation
Mar 25, 2022

DXQ-Net: Differentiable LiDAR-Camera Extrinsic Calibration Using Quality-aware Flow
Mar 17, 2022

Least Square Estimation Network for Depth Completion
Mar 07, 2022

Translation Invariant Global Estimation of Heading Angle Using Sinogram of LiDAR Point Cloud
Mar 02, 2022

Electric Vehicle Automatic Charging System Based on Vision-force Fusion
Oct 18, 2021