This paper introduces a new method for estimating the penetration of the end effector into a soft body, together with the body's parameters, using a collaborative robotic arm. This is made possible by a dimensionality reduction method that simplifies the Hunt-Crossley model. The parameters can be found without a force sensor, thanks to the information provided by the robotic arm's controller. To achieve an online estimation, an extended Kalman filter is employed, which embeds the contact dynamics model. The algorithm is tested with various types of silicone, including samples with hard inclusions that simulate cancerous cells within soft tissue. The results indicate that this technique can accurately determine the parameters and estimate the penetration of the end effector into the soft body. These promising preliminary results demonstrate the potential for robots to serve as an effective tool for early-stage cancer diagnostics.
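As a rough illustration of this estimation scheme, the sketch below implements an extended Kalman filter whose measurement model is a Hunt-Crossley contact force and whose state gathers penetration, penetration rate, and the stiffness and damping parameters. The exponent n, the sampling time, and the covariances Q and R are assumptions chosen for the example; f_meas stands in for the contact force reconstructed from the controller's joint-torque information rather than a force sensor.

```python
# Hedged sketch of an EKF embedding a Hunt-Crossley contact model.
# All numerical choices (n, dt, Q, R) are illustrative assumptions.
import numpy as np

n = 1.5    # assumed Hunt-Crossley exponent (held fixed for simplicity)
dt = 1e-3  # assumed estimation period [s]

def f_hc(z):
    """Hunt-Crossley force: F = k*x^n + lam*x^n*xdot."""
    x, xd, k, lam = z
    xn = max(x, 0.0) ** n
    return k * xn + lam * xn * xd

def predict(z):
    """Constant-velocity motion for x; parameters as random walks."""
    x, xd, k, lam = z
    return np.array([x + dt * xd, xd, k, lam])

def jacobian(fun, z, eps=1e-6):
    """Numerical Jacobian, sufficient for a sketch."""
    z = np.asarray(z, float)
    J = np.zeros((np.size(fun(z)), z.size))
    for i in range(z.size):
        dz = np.zeros_like(z); dz[i] = eps
        J[:, i] = (np.atleast_1d(fun(z + dz)) - np.atleast_1d(fun(z - dz))) / (2 * eps)
    return J

def ekf_step(z, P, f_meas, Q, R):
    # predict step
    F = jacobian(predict, z)
    z = predict(z)
    P = F @ P @ F.T + Q
    # update step, with f_meas the force reconstructed from joint torques
    H = jacobian(lambda s: np.array([f_hc(s)]), z)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ np.array([f_meas - f_hc(z)])).ravel()
    P = (np.eye(len(z)) - K @ H) @ P
    return z, P
```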
Medical applications of robots are increasingly popular as a means to objectivise and speed up the execution of several types of diagnostic and therapeutic interventions. Particularly important is a class of diagnostic activities that require physical contact between the robotic tool and the human body, such as palpation examinations and ultrasound scans. The practical application of these techniques can greatly benefit from an accurate estimation of the biomechanical properties of the patient's tissues. In this paper, we evaluate the accuracy and precision with which a robotic device used for medical purposes estimates the elastic parameters of different materials. The measurements are evaluated against a ground truth consisting of a set of expanded-foam specimens of differing elasticity, characterised using a high-precision device. The experimental results show a precision comparable with that of the ground-truth device and suggest ambitious future developments.
In this paper, we propose a robot-oriented knowledge management system based on the use of the Prolog language. Our framework hinges on a special organisation of the knowledge base that enables: 1. its efficient population from natural language texts using semi-automated procedures based on Large Language Models, 2. the seamless generation of temporal parallel plans for multi-robot systems through a sequence of transformations, 3. the automated translation of the plan into an executable formalism (behaviour trees). The framework is supported by a set of open-source tools and is demonstrated on a realistic application.
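To make the final transformation step concrete, here is a deliberately minimal sketch of mapping a sequential plan into a behaviour tree; the tiny BT classes and the skills mapping are assumptions for the example, not the framework's actual API.

```python
# Illustrative plan-to-behaviour-tree translation (assumed, simplified API).
class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()  # expected to return "success"/"failure"/"running"

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self):
        # tick children in order; stop at the first non-successful one
        for c in self.children:
            status = c.tick()
            if status != "success":
                return status
        return "success"

def plan_to_bt(plan, skills):
    """plan: ordered action names; skills: name -> callable returning a status."""
    return Sequence([Action(a, skills[a]) for a in plan])
```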
In the context of telehealth, robotic approaches have proven a valuable alternative to in-person visits in remote areas, with decreased costs for patients and reduced infection risks. In particular, in ultrasonography, robots have the potential to reproduce the skills required to acquire high-quality images while reducing the sonographer's physical efforts. In this paper, we address the control of the interaction of the probe with the patient's body, a critical aspect of ensuring safe and effective ultrasonography. We introduce a novel approach based on variable impedance control, allowing real-time optimisation of the parameters of a compliant controller during ultrasound procedures. This optimisation is formulated as a quadratic programming problem and incorporates physical constraints derived from viscoelastic parameter estimations. Safety and passivity constraints, including an energy tank, are also integrated to minimise potential risks during human-robot interaction. The proposed method's efficacy is demonstrated through experiments on a patient dummy torso, highlighting its potential for achieving safe behaviour and accurate force control during ultrasound procedures, even in cases of contact loss.
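For intuition on the structure of such an optimisation, the sketch below poses a one-degree-of-freedom QP in cvxpy that picks stiffness and damping to track a desired contact force under bound constraints and a crude energy-tank inequality. The bounds, the tank term, and the quasi-static force model are illustrative assumptions, not the paper's exact formulation.

```python
# Simplified 1-DoF variable-impedance QP (assumed formulation, not the paper's).
import cvxpy as cp

def solve_impedance_qp(e, e_dot, f_des, tank_energy, dt,
                       k_lim=(50.0, 1500.0), d_lim=(5.0, 120.0)):
    k = cp.Variable()  # stiffness to apply this control cycle
    d = cp.Variable()  # damping to apply this control cycle
    f = k * e + d * e_dot  # quasi-static force along the probe axis
    constraints = [
        k >= k_lim[0], k <= k_lim[1],          # assumed physical bounds
        d >= d_lim[0], d <= d_lim[1],
        # crude passivity proxy: the energy tank must not be depleted
        tank_energy - dt * cp.abs(f * e_dot) >= 0,
    ]
    cp.Problem(cp.Minimize(cp.square(f - f_des)), constraints).solve()
    return float(k.value), float(d.value)
```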
This paper presents a new method for describing spatio-temporal relations between objects and hands, in order to recognize both interactions and activities within video demonstrations of manual tasks. The approach exploits Scene Graphs to extract key interaction features from image sequences, encoding motion patterns and context at the same time. Additionally, the method introduces an event-based automatic video segmentation and clustering, which allows similar events to be grouped and detects on the fly whether a monitored activity is executed correctly. The effectiveness of the approach was demonstrated in two multi-subject experiments, showing the ability to recognize and cluster hand-object and object-object interactions without prior knowledge of the activity, as well as to match the same activity performed by different subjects.
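As an illustration of the event-based mechanism (not the paper's implementation), the sketch below derives scene-graph edges from proximity between detected entities, cuts a new segment whenever the edge set changes, and clusters fixed-length segment features; the "touching" relation, the distance threshold, and the cluster count are assumptions.

```python
# Assumed, simplified event segmentation over scene-graph edges.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def frame_edges(detections, touch_thresh=0.05):
    """Build edges like ('cup', 'touching', 'hand') from entity positions."""
    edges = set()
    for a, pa in detections.items():
        for b, pb in detections.items():
            if a < b and np.linalg.norm(np.asarray(pa) - np.asarray(pb)) < touch_thresh:
                edges.add((a, "touching", b))
    return edges

def segment_events(frames):
    """Cut the video whenever the scene-graph edge set changes."""
    segments, prev = [], None
    for t, det in enumerate(frames):
        edges = frame_edges(det)
        if edges != prev:
            segments.append((t, edges))
            prev = edges
    return segments

def cluster_segments(features, n_clusters=4):
    """Group similar events from fixed-length segment feature vectors."""
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)
```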
Human-robot collaborative disassembly is an emerging trend in the sustainable recycling process of electronic and mechanical products. It requires the use of advanced technologies to assist workers in repetitive physical tasks and to deal with creaky and potentially damaged components. Nevertheless, when disassembling worn-out or damaged components, unexpected robot behaviors may emerge, so harmless and symbiotic physical interaction with humans and the environment becomes paramount. This work addresses this challenge at the control level by ensuring safe and passive behaviors in unplanned interactions and contact losses. The proposed algorithm capitalizes on an energy-aware Cartesian impedance controller, which features energy scaling and damping injection, and an augmented energy tank, which limits the power flow from the controller to the robot. The controller is evaluated in a real-world flawed unscrewing task with a Franka Emika Panda and is compared to a standard impedance controller and a hybrid force-impedance controller. The results demonstrate the high potential of the algorithm in human-robot collaborative disassembly tasks.
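For intuition, a one-degree-of-freedom sketch of a tank-limited impedance law is given below; the gains, tank limits, and fallback rule are assumptions made for the example, and the actual controller additionally performs energy scaling over the full Cartesian behavior.

```python
# Minimal 1-DoF energy-tank-limited impedance law (assumed parameters).

class TankLimitedImpedance:
    def __init__(self, k=400.0, d=40.0, E0=2.0, E_min=0.1, E_max=4.0):
        self.k, self.d = k, d                 # stiffness [N/m], damping [Ns/m]
        self.E = E0                           # tank energy [J]
        self.E_min, self.E_max = E_min, E_max

    def step(self, e, e_dot, dt):
        """e = x_des - x with x_des constant, so e_dot = -x_dot."""
        f = self.k * e + self.d * e_dot       # nominal impedance force
        p_out = -f * e_dot                    # power from controller to robot
        if p_out > 0.0 and self.E - p_out * dt < self.E_min:
            # tank nearly empty: drop the active (stiffness) action and keep
            # only the dissipative damping term, so no net energy is injected
            f = self.d * e_dot
            p_out = -f * e_dot                # now <= 0 (dissipation refills tank)
        self.E = min(self.E - p_out * dt, self.E_max)
        return f
```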
The growing deployment of human-robot collaborative processes in several industrial applications, such as handling, welding, and assembly, drives the pursuit of systems able to manage large heterogeneous teams and, at the same time, monitor the execution of complex tasks. In this paper, we present a novel architecture for dynamic role allocation and collaborative task planning in a mixed human-robot team of arbitrary size. The architecture capitalizes on a centralized, reactive, and modular task-agnostic planning method based on Behavior Trees (BTs), in charge of action scheduling, while the allocation problem is formulated through a Mixed-Integer Linear Program (MILP) that dynamically assigns individual roles or collaborations to the agents of the team. Different metrics used as the MILP cost allow the architecture to favor various aspects of the collaboration (e.g., makespan, ergonomics, human preferences). Human preferences are identified through a negotiation phase, in which a human agent can accept or refuse to execute the assigned task. In addition, bilateral communication between humans and the system is achieved through a custom Augmented Reality (AR) user interface that provides intuitive functionalities to assist and coordinate workers in different action phases. The computational complexity of the proposed methodology outperforms literature approaches on industrial-sized jobs and teams (problems with up to 50 actions and 20 agents in the team, including collaborations, are solved within 1 s). The different roles allocated as the cost functions change highlight the flexibility of the architecture with respect to several production requirements. Finally, a subjective evaluation demonstrates the high usability level and the suitability of the system for the targeted scenario.
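A toy MILP in the spirit of this allocation problem is sketched below using PuLP/CBC: binary variables x[a][g] assign each action a to one agent g while minimising a total cost. The actions, agents, and cost table are placeholder assumptions; the paper's MILP additionally handles collaborations and richer metrics such as makespan and ergonomics.

```python
# Toy role-allocation MILP (placeholder data, simplified constraints).
import pulp

def allocate(actions, agents, cost):
    prob = pulp.LpProblem("role_allocation", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (actions, agents), cat="Binary")
    # objective: total allocation cost (stand-in for makespan/ergonomics terms)
    prob += pulp.lpSum(cost[a][g] * x[a][g] for a in actions for g in agents)
    # each action is assigned to exactly one agent
    for a in actions:
        prob += pulp.lpSum(x[a][g] for g in agents) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {a: next(g for g in agents if x[a][g].value() > 0.5) for a in actions}

# hypothetical two-action, two-agent instance
plan = allocate(["pick", "screw"], ["human", "robot"],
                {"pick": {"human": 1, "robot": 3},
                 "screw": {"human": 4, "robot": 2}})  # -> pick: human, screw: robot
```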
By incorporating ergonomics principles into the task allocation process, human-robot collaboration (HRC) frameworks can favour the prevention of work-related musculoskeletal disorders (WMSDs). In this context, existing offline methodologies do not account for the variability of human actions and states; therefore, planning and dynamically assigning roles in human-robot teams remains an unaddressed challenge. This study aims to create an ergonomic role allocation framework that optimises the HRC, taking into account task features and human state measurements. The presented framework consists of two main modules: the first provides the HRC task model, exploiting AND/OR Graphs (AOGs), which we adapted to solve the allocation problem; the second describes the ergonomic risk assessment during task execution through a risk indicator and updates the AOG-related variables to influence future task allocation. The proposed framework can be combined with any time-varying ergonomic risk indicator that evaluates human cognitive and physical burden. In this work, we tested our framework in an assembly scenario, introducing a risk index named Kinematic Wear. The overall framework has been tested in a multi-subject experiment. The task allocation results and subjective evaluations, measured with questionnaires, show that high-risk actions are correctly recognised and not assigned to humans, reducing fatigue and frustration in collaborative tasks.
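A hedged sketch of the second module's feedback loop follows: a time-varying risk value per action (a stand-in for the Kinematic Wear index, which is not reproduced here) inflates the human-side cost of the corresponding AOG actions, steering future allocations away from high-risk assignments. The cost structure and the weighting rule are assumptions for the example.

```python
# Assumed risk-driven update of AOG allocation costs (illustrative only).
def update_action_costs(aog_costs, risk, alpha=2.0):
    """aog_costs: {action: {"human": c_h, "robot": c_r}};
    risk: {action: ergonomic indicator value in [0, 1]}."""
    for action, r in risk.items():
        # penalise risky actions on the human side so the allocator avoids them
        aog_costs[action]["human"] *= (1.0 + alpha * r)
    return aog_costs
```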
This paper proposes a hybrid learning and optimization framework for mobile manipulators performing complex, physically interactive tasks. The framework exploits the MOCA-MAN interface to obtain intuitive and simplified human demonstrations, and Gaussian Mixture Model/Gaussian Mixture Regression (GMM/GMR) to encode and generate the learned task requirements in terms of position, velocity, and force profiles. Next, using the desired trajectories and force profiles generated by GMM/GMR, the impedance parameters of a Cartesian impedance controller are optimized online through a Quadratic Program, augmented with an energy tank to ensure the passivity of the controlled system. Two experiments are conducted to validate the framework, comparing our method with two constant-stiffness approaches (high and low). The results show that the proposed method outperforms the other two cases in terms of trajectory tracking and generated interaction forces, even in the presence of disturbances such as unexpected end-effector collisions.
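A minimal sketch of the GMM/GMR step is given below, under the assumption that each demonstration sample is stacked as a row [t, y], with t a time or phase input and y the position/velocity/force channels; this is the standard GMR conditional-mean computation, not the paper's code.

```python
# Standard GMR over a fitted GMM (assumed data layout: rows of [t, y_1..y_d]).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(data, n_components=5):
    """data: (N, 1 + d) array of demonstration samples [t, y]."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(data)

def gmr(gmm, t):
    """Conditional mean E[y | t], with t the first column of the data."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # responsibilities of each component for the input t
    h = np.array([w[i] * np.exp(-0.5 * (t - means[i, 0]) ** 2 / covs[i, 0, 0])
                  / np.sqrt(covs[i, 0, 0]) for i in range(len(w))])
    h /= h.sum()
    # per-component conditional means, blended by responsibility
    y = sum(h[i] * (means[i, 1:] + covs[i, 1:, 0] / covs[i, 0, 0]
                    * (t - means[i, 0])) for i in range(len(w)))
    return y
```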