Hiroki Mori

A Peg-in-hole Task Strategy for Holes in Concrete

Mar 29, 2024
André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata

Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions

Dec 27, 2023
André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata

Real-time Motion Generation and Data Augmentation for Grasping Moving Objects with Dynamic Speed and Position Changes

Sep 22, 2023
Kenjiro Yamamoto, Hiroshi Ito, Hideyuki Ichiwara, Hiroki Mori, Tetsuya Ogata

A generative framework for conversational laughter: Its 'language model' and laughter sound synthesis

Jun 06, 2023
Hiroki Mori, Shunya Kimura

Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use

Jun 29, 2022
Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata

Collision-free Path Planning on Arbitrary Optimization Criteria in the Latent Space through cGANs

Feb 26, 2022
Tomoki Ando, Hiroto Iino, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata

Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Information for Robot Pose Prediction

Feb 21, 2022
Hyogo Hiruma, Hiroki Mori, Tetsuya Ogata

Collision-free Path Planning in the Latent Space through cGANs

Feb 15, 2022
Tomoki Ando, Hiroki Mori, Ryota Torishima, Kuniyuki Takahashi, Shoichiro Yamaguchi, Daisuke Okanohara, Tetsuya Ogata

Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility

Dec 13, 2021
Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata

How to select and use tools? : Active Perception of Target Objects Using Multimodal Deep Learning

Jun 04, 2021
Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano
