Jun Takamatsu

Designing Library of Skill-Agents for Hardware-Level Reusability

Mar 04, 2024
Jun Takamatsu, Daichi Saito, Katsushi Ikeuchi, Atsushi Kanehira, Kazuhiro Sasabuchi, Naoki Wake

GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration

Nov 20, 2023
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi

Constraint-aware Policy for Compliant Manipulation

Nov 18, 2023
Daichi Saito, Kazuhiro Sasabuchi, Naoki Wake, Atsushi Kanehira, Jun Takamatsu, Hideki Koike, Katsushi Ikeuchi

Bias in Emotion Recognition with ChatGPT

Oct 18, 2023
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi

Applying Learning-from-observation to household service robots: three common-sense formulation

Apr 19, 2023
Katsushi Ikeuchi, Jun Takamatsu, Kazuhiro Sasabuchi, Naoki Wake, Atsushi Kanehira

ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application

Apr 18, 2023
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi

Bounding Box Annotation with Visible Status

Apr 11, 2023
Takuya Kiyokawa, Naoki Shirakura, Hiroki Katayama, Keita Tomochika, Jun Takamatsu

Task-sequencing Simulator: Integrated Machine Learning to Execution Simulation for Robot Manipulation

Jan 03, 2023
Kazuhiro Sasabuchi, Daichi Saito, Atsushi Kanehira, Naoki Wake, Jun Takamatsu, Katsushi Ikeuchi

Interactive Learning-from-Observation through multimodal human demonstration

Dec 21, 2022
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
