
Katsushi Ikeuchi

APriCoT: Action Primitives based on Contact-state Transition for In-Hand Tool Manipulation

Jul 16, 2024

Designing Library of Skill-Agents for Hardware-Level Reusability

Mar 04, 2024

Agent AI: Surveying the Horizons of Multimodal Interaction

Jan 07, 2024

GPT-4V for Robotics: Multimodal Task Planning from Human Demonstration

Nov 20, 2023

Constraint-aware Policy for Compliant Manipulation

Nov 18, 2023

Bias in Emotion Recognition with ChatGPT

Oct 18, 2023

Applying Learning-from-observation to household service robots: three common-sense formulation

Apr 19, 2023

ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application

Apr 18, 2023

Task-sequencing Simulator: Integrated Machine Learning to Execution Simulation for Robot Manipulation

Jan 03, 2023

Interactive Learning-from-Observation through multimodal human demonstration

Dec 21, 2022