Kazuhiro Shintani

Where Do We Look When We Teach? Analyzing Human Gaze Behavior Across Demonstration Devices in Robot Imitation Learning

Jun 06, 2025

CLIP-Clique: Graph-based Correspondence Matching Augmented by Vision Language Models for Object-based Global Localization

Oct 04, 2024

Robust Imitation Learning for Mobile Manipulator Focusing on Task-Related Viewpoints and Regions

Oct 02, 2024

Self-Supervised Geometry-Guided Initialization for Robust Monocular Visual Odometry

Jun 03, 2024

CLIP-Loc: Multi-modal Landmark Association for Global Localization in Object-based Maps

Feb 08, 2024