Tetsuya Ogata

A Peg-in-hole Task Strategy for Holes in Concrete

Mar 29, 2024
André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata

Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions

Dec 27, 2023
André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata

Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot

Sep 26, 2023
Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki Sugano, Tetsuya Ogata

Real-time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes

Sep 22, 2023
Kenjiro Yamamoto, Hiroshi Ito, Hideyuki Ichiwara, Hiroki Mori, Tetsuya Ogata

Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model

Aug 30, 2023
Kazuki Hori, Kanata Suzuki, Tetsuya Ogata

Deep Predictive Learning: Motion Learning Concept inspired by Cognitive Robotics

Jun 26, 2023
Kanata Suzuki, Hiroshi Ito, Tatsuro Yamada, Kei Kase, Tetsuya Ogata

Force Map: Learning to Predict Contact Force Distribution from Vision

Apr 12, 2023
Ryo Hanai, Yukiyasu Domae, Ixchel G. Ramirez-Alpizar, Bruno Leme, Tetsuya Ogata

Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use

Jun 29, 2022
Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata

Multi-Fingered In-Hand Manipulation with Various Object Properties Using Graph Convolutional Networks and Distributed Tactile Sensors

May 09, 2022
Satoshi Funabashi, Tomoki Isobe, Fei Hongyi, Atsumu Hiramoto, Alexander Schmitz, Shigeki Sugano, Tetsuya Ogata

Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data

Mar 08, 2022
Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata
