Tetsunari Inamura

Latent Representation in Human-Robot Interaction with Explicit Consideration of Periodic Dynamics

Jun 16, 2021
Taisuke Kobayashi, Shingo Murata, Tetsunari Inamura


SIGVerse: A cloud-based VR platform for research on social and embodied human-robot interaction

May 02, 2020
Tetsunari Inamura, Yoshiaki Mizuchi


Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

Feb 18, 2020
Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura


Learning multimodal representations for sample-efficient recognition of human actions

Mar 06, 2019
Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura


Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

Jan 04, 2019
Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura


Online Spatial Concept and Lexical Acquisition with Simultaneous Localization and Mapping

Mar 09, 2018
Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura


SpCoSLAM 2.0: An Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

Mar 09, 2018
Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura


Bayesian Body Schema Estimation using Tactile Information obtained through Coordinated Random Movements

Dec 01, 2016
Tomohiro Mimura, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura


Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

May 07, 2016
Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura
