
Kourosh Darvish


Errors are Useful Prompts: Instruction Guided Task Programming with Verifier-Assisted Iterative Prompting

Mar 24, 2023
Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, Animesh Garg

Generating low-level robot task plans from high-level natural language instructions remains a challenging problem. Although large language models have shown promising results in generating plans, the accuracy of their outputs remains unverified. Furthermore, the lack of domain-specific language data limits the applicability of these models. In this paper, we propose CLAIRIFY, a novel approach that combines automatic iterative prompting with program verification to ensure that programs written in a data-scarce domain-specific language are syntactically valid and incorporate environment constraints. Our approach guides the language model toward generating structured task plans by incorporating any errors as feedback, while the verifier ensures the syntactic accuracy of the generated plans. We demonstrate the effectiveness of CLAIRIFY in planning chemistry experiments, achieving state-of-the-art results. We also show that the generated plans can be executed on a real robot by integrating them with a task and motion planner.
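The generate-verify-reprompt loop described above can be sketched in a few lines. Everything here (the function names, the toy model, and the toy verifier) is illustrative, not the CLAIRIFY implementation:

```python
# Minimal sketch of verifier-assisted iterative prompting: generate a plan,
# run a verifier, and feed any errors back into the next prompt.
# `generate_plan` stands in for a language-model call; all names are
# hypothetical, chosen only to make the control flow visible.

def iterative_prompting(instruction, generate_plan, verify, max_rounds=5):
    """Return a plan that passes `verify`, or None after max_rounds."""
    feedback = []  # verifier errors accumulated as extra prompt context
    for _ in range(max_rounds):
        plan = generate_plan(instruction, feedback)
        errors = verify(plan)  # empty list means syntactically valid
        if not errors:
            return plan
        feedback = errors  # errors become useful prompts for the next round
    return None


# Toy stand-ins: the "model" forgets a closing tag until told about it,
# and the "verifier" checks that every <step> is closed.
def toy_model(instruction, feedback):
    plan = "<step>add water"
    if feedback:
        plan += "</step>"
    return plan

def toy_verifier(plan):
    if plan.count("<step>") == plan.count("</step>"):
        return []
    return ["unclosed <step> tag"]

print(iterative_prompting("dissolve salt", toy_model, toy_verifier))
# -> <step>add water</step>  (valid after one round of error feedback)
```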

Simultaneous Action Recognition and Human Whole-Body Motion and Dynamics Prediction from Wearable Sensors

Mar 14, 2023
Kourosh Darvish, Serena Ivaldi, Daniele Pucci

This paper presents a novel approach for simultaneously solving the problems of human activity recognition and whole-body motion and dynamics prediction for real-time applications. Starting from the dynamics of human motion and motor-system theory, the notion of a mixture of experts from deep learning is extended to address this problem. In the proposed approach, each expert is modelled as a sequence-to-sequence recurrent neural network (RNN). Experiments show the results of 66-DoF real-world human motion prediction and action recognition during different tasks such as walking and rotating. The code associated with this paper is available at: github.com/ami-iit/paper_darvish_2022_humanoids_action-kindyn-predicition
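As a toy illustration of the mixture-of-experts idea above, the sketch below blends the outputs of two stand-in "experts" with a softmax gate. The paper's experts are sequence-to-sequence RNNs; these are fixed linear maps chosen purely to make the combination rule visible, and all names are illustrative:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_predict(x, experts, gate_weights):
    """Blend per-expert predictions with softmax gate scores."""
    scores = softmax(gate_weights @ x)           # one score per expert
    preds = np.stack([W @ x for W in experts])   # each expert's prediction
    return scores @ preds                        # convex combination

# Two hypothetical experts (e.g. "walking" vs "rotating") and a gate that
# strongly prefers expert 0 for this input.
experts = [np.eye(2), 2 * np.eye(2)]
gate = np.array([[5.0, 0.0], [0.0, 5.0]])
x = np.array([1.0, 0.0])
print(mixture_predict(x, experts, gate))  # close to expert 0's output [1, 0]
```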

Teleoperation of Humanoid Robots: A Survey

Jan 11, 2023
Kourosh Darvish, Luigi Penco, Joao Ramos, Rafael Cisneros, Jerry Pratt, Eiichi Yoshida, Serena Ivaldi, Daniele Pucci

Teleoperation of humanoid robots enables the integration of the cognitive skills and domain expertise of humans with the physical capabilities of humanoid robots. The operational versatility of humanoid robots makes them an ideal platform for a wide range of applications when teleoperated in a remote environment. However, the complexity of humanoid robots imposes challenges for teleoperation, particularly in unstructured dynamic environments with limited communication. Many advances have been achieved in this area over the last decades, but a comprehensive overview is still missing. This survey paper gives an extensive overview of humanoid robot teleoperation, presenting the general architecture of a teleoperation system and analyzing its different components. We also discuss different aspects of the topic, including technological and methodological advances, as well as potential applications. A web-based version of the paper can be found at https://humanoid-teleoperation.github.io/.

An Adaptive Robotics Framework for Chemistry Lab Automation

Dec 19, 2022
Naruki Yoshikawa, Andrew Zou Li, Kourosh Darvish, Yuchi Zhao, Haoping Xu, Alan Aspuru-Guzik, Animesh Garg, Florian Shkurti

In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our system can load high-level descriptions of chemistry experiments, perceive a dynamic workspace, and autonomously plan the required actions and motions to perform the given chemistry experiments with common tools found in the existing lab environment. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools. In this work, we demonstrate the utility of our framework on three pouring skills and two foundational chemical experiments for materials synthesis: solubility and recrystallization. More experiments and updated evaluations can be found at https://ac-rad.github.io/arc-icra2023.

* Equal author contribution from Naruki Yoshikawa, Andrew Zou Li, Kourosh Darvish, Yuchi Zhao and Haoping Xu 
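The symbolic side of such integrated task planning can be illustrated with a bare-bones forward search over STRIPS-like states. The real system uses a modified PDDLStream solver with constrained motion planning, which this hypothetical toy omits entirely; all predicate and action names below are made up:

```python
from collections import deque

# A state is a frozenset of facts; an action is (name, preconditions,
# add-effects, delete-effects). Breadth-first search returns the shortest
# action sequence reaching the goal, or None if none exists.

def plan(init, goal, actions):
    """Forward breadth-first search over symbolic states."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, history = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return history
        for name, pre, add, rem in actions:
            if pre <= state:                   # preconditions satisfied
                nxt = (state - rem) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, history + [name]))
    return None

# Hypothetical chemistry-lab actions in the spirit of the pouring skills.
actions = [
    ("pick-beaker", {"beaker-on-table"}, {"holding-beaker"}, {"beaker-on-table"}),
    ("pour", {"holding-beaker"}, {"solvent-in-vial"}, set()),
]
print(plan({"beaker-on-table"}, {"solvent-in-vial"}, actions))
# -> ['pick-beaker', 'pour']
```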
iCub3 Avatar System

Mar 14, 2022
Stefano Dafarra, Kourosh Darvish, Riccardo Grieco, Gianluca Milani, Ugo Pattacini, Lorenzo Rapetti, Giulio Romualdi, Mattia Salvi, Alessandro Scalzo, Ines Sorrentino, Davide Tomè, Silvio Traversaro, Enrico Valli, Paolo Maria Viceconte, Giorgio Metta, Marco Maggiali, Daniele Pucci

We present an avatar system that enables a human operator to visit a remote location via iCub3, a new humanoid robot developed at the Italian Institute of Technology (IIT) that paves the way for the next generation of iCub platforms. On the one hand, we present the humanoid iCub3, which plays the role of the robotic avatar. Particular attention is paid to the differences between iCub3 and the classical iCub humanoid robot. On the other hand, we present the set of technologies of the avatar system on the operator side. These are mainly composed of iFeel, namely IIT's lightweight non-invasive wearable devices for motion tracking and haptic feedback, and of non-IIT technologies designed for virtual reality ecosystems. Finally, we show the effectiveness of the avatar system by describing a demonstration involving real-time teleoperation of the iCub3. The robot is located in Venice, at the Biennale di Venezia, while the human operator is more than 290 km away at IIT in Genoa. Using a standard fiber optic internet connection, the avatar system transports the operator's locomotion, manipulation, voice, and facial expressions to the iCub3, providing visual, auditory, haptic and touch feedback in return.

A Task Allocation Approach for Human-Robot Collaboration in Product Defects Inspection Scenarios

Sep 14, 2020
Hossein Karami, Kourosh Darvish, Fulvio Mastrogiovanni

The presence and coexistence of human operators and collaborative robots in shop-floor environments raises the need for assigning tasks to either operators or robots, or both. Depending on task characteristics, operator capabilities and the involved robot functionalities, it is of the utmost importance to design strategies allowing for the concurrent and/or sequential allocation of tasks related to object manipulation and assembly. In this paper, we extend the FlexHRC framework presented in our earlier work (Darvish et al., 2018) to allow a human operator to interact with multiple, heterogeneous robots at the same time in order to jointly carry out a given task. The extended FlexHRC framework leverages a concurrent and sequential task representation to allocate tasks to either operators or robots as part of a dynamic collaboration process. In particular, we focus on a use case related to the inspection of product defects, which involves a human operator, a dual-arm Baxter manipulator from Rethink Robotics, and a KUKA youBot mobile manipulator.

* 8 pages, 5 figures, The 29th IEEE International Conference on Robot & Human Interactive Communication 
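The AND/OR task representation used in the FlexHRC line of work can be illustrated with a tiny recursive check: an OR node is solved when any of its AND-groups is solved, and an AND-group is solved when all of its children are. The graph structure and task names below are hypothetical, not the paper's model:

```python
# An inner node maps to a list of AND-groups (alternative decompositions);
# anything not in the graph is a leaf action, solved once it has been
# executed (i.e. it appears in `done`).

def solved(node, graph, done):
    """Recursively decide whether `node` is achieved given completed leaves."""
    if node not in graph:            # leaf action
        return node in done
    return any(all(solved(child, graph, done) for child in group)
               for group in graph[node])

# Hypothetical defect-inspection task: it succeeds if either the robot
# flips the part and the human checks it, or the human does both steps.
graph = {
    "inspect": [["robot-flip", "human-check"],
                ["human-flip", "human-check"]],
}
print(solved("inspect", graph, done={"human-flip", "human-check"}))  # True
```

Task allocation then amounts to choosing, online, which AND-group to pursue and which agent (operator or robot) executes each remaining leaf.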
A Hierarchical Architecture for Human-Robot Cooperation Processes

Sep 06, 2020
Kourosh Darvish, Enrico Simetti, Fulvio Mastrogiovanni, Giuseppe Casalino

In this paper we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely perception, representation, and action. Building on previous work, here we focus on (i) an in-the-loop decision-making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and (ii) the representation level, integrating a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic. The architecture is accompanied by experiments including collaborative furniture assembly and object positioning tasks.

Deployment and Evaluation of a Flexible Human-Robot Collaboration Model Based on AND/OR Graphs in a Manufacturing Environment

Jul 13, 2020
Prajval Kumar Murali, Kourosh Darvish, Fulvio Mastrogiovanni

The Industry 4.0 paradigm promises shorter development times, improved ergonomics, higher flexibility, and resource efficiency in manufacturing environments. Collaborative robots are an important tangible technology for implementing such a paradigm. A major bottleneck to effectively deploying collaborative robots in manufacturing industries is developing task planning algorithms that enable them to recognize and naturally adapt to varying and even unpredictable human actions while simultaneously ensuring overall efficiency in terms of production cycle time. In this context, an architecture encompassing task representation, task planning, sensing, and robot control has been designed, developed and evaluated in a real industrial environment. A pick-and-place palletization task, which requires collaboration between humans and robots, is investigated. The architecture uses AND/OR graphs for representing and reasoning upon human-robot collaboration models online. Furthermore, objective measures of overall computational performance and subjective measures of naturalness in human-robot collaboration have been evaluated by performing experiments with production-line operators. The results of this user study demonstrate how human-robot collaboration models like the one we propose can leverage the flexibility and the comfort of operators in the workplace. In this regard, an extensive comparison study among recent models has been carried out.

Recent Advances in Human-Robot Collaboration Towards Joint Action

Jan 02, 2020
Yeshasvi Tirupachuri, Gabriele Nava, Lorenzo Rapetti, Claudia Latella, Kourosh Darvish, Daniele Pucci

Robots have so far existed as separate entities, but a symbiotic human-robot partnership is now on the horizon. Despite all the recent technical advances in hardware, robots are still not endowed with the relational skills that would ensure a social component in their existence. This article draws from our experience as roboticists in Human-Robot Collaboration (HRC) with humanoid robots and presents some of the recent advances made towards realizing intuitive robot behaviors and partner-aware control involving physical interactions.

* Extended Abstract Accepted and Presented at The Communication Challenges in Joint Action for Human-Robot Interaction Workshop, International Conference on Social Robotics (ICSR) 2019, Madrid, Spain 
Whole-Body Geometric Retargeting for Humanoid Robots

Sep 22, 2019
Kourosh Darvish, Yeshasvi Tirupachuri, Giulio Romualdi, Lorenzo Rapetti, Diego Ferigo, Francisco Javier Andrade Chavez, Daniele Pucci

Humanoid robot teleoperation allows humans to integrate their cognitive capabilities with the physical capabilities of the robot to perform tasks that require high strength, manoeuvrability and dexterity. This paper presents a framework for the teleoperation of humanoid robots using a novel approach to motion retargeting through inverse kinematics over the robot model. The proposed method enhances the scalability of retargeting, i.e., it allows different human users to teleoperate different robots with minimal changes to the proposed system. Our framework enables an intuitive and natural interaction between the human operator and the humanoid robot at the configuration-space level. We validate our approach by demonstrating whole-body retargeting with multiple robot models. Furthermore, we present experimental validation through teleoperation experiments using two state-of-the-art whole-body controllers for humanoid robots.

* 2019 IEEE-RAS International Conference on Humanoid Robots  
* Equal author contribution from Kourosh Darvish and Yeshasvi Tirupachuri 
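To give a flavour of retargeting through inverse kinematics, the sketch below solves a textbook two-link planar IK problem for a scaled human hand target. The paper's solver handles whole-body kinematics over the full robot model, so this is only a minimal stand-in; the link lengths, scale factor, and target are illustrative:

```python
import numpy as np

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a 2-link planar arm; returns (shoulder, elbow)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = np.arccos(np.clip(cos_elbow, -1.0, 1.0))   # clip guards rounding
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow),
                                             l1 + l2 * np.cos(elbow))
    return shoulder, elbow

# Retargeting step: scale the tracked human hand position into the robot's
# (shorter) workspace, then solve for the robot's joint angles.
human_target = (0.5, 0.3)   # metres, in the human frame
scale = 0.8                 # hypothetical human-to-robot limb-length ratio
rx, ry = (scale * c for c in human_target)
q1, q2 = two_link_ik(rx, ry, l1=0.3, l2=0.25)
print(q1, q2)
```

Forward kinematics with the returned angles reproduces the scaled target, which is the property any retargeting solver must preserve.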