
Tetsuya Ogata

TaSA: Two-Phased Deep Predictive Learning of Tactile Sensory Attenuation for Improving In-Grasp Manipulation

Feb 05, 2026

Proprioception Enhances Vision Language Model in Generating Captions and Subtask Segmentations for Robot Task

Dec 24, 2025

Input-gated Bilateral Teleoperation: An Easy-to-implement Force Feedback Teleoperation Method for Low-cost Hardware

Sep 10, 2025

Close-Fitting Dressing Assistance Based on State Estimation of Feet and Garments with Semantic-based Visual Attention

May 06, 2025

Focused Blind Switching Manipulation Based on Constrained and Regional Touch States of Multi-Fingered Hand Using Deep Learning

Mar 10, 2025

Visual Imitation Learning of Non-Prehensile Manipulation Tasks with Dynamics-Supervised Models

Oct 25, 2024

Achieving Faster and More Accurate Operation of Deep Predictive Learning

Aug 03, 2024

Dual-arm Motion Generation for Repositioning Care based on Deep Predictive Learning with Somatosensory Attention Mechanism

Jul 18, 2024

Sensorimotor Attention and Language-based Regressions in Shared Latent Variables for Integrating Robot Motion Learning and LLM

Jul 12, 2024

A Peg-in-hole Task Strategy for Holes in Concrete

Mar 29, 2024