Darwin G. Caldwell

Kinematically-Decoupled Impedance Control for Fast Object Visual Servoing and Grasping on Quadruped Manipulators

Jul 10, 2023
Riccardo Parosi, Mattia Risiglione, Darwin G. Caldwell, Claudio Semini, Victor Barasuol

We propose a control pipeline for SAG (Searching, Approaching, and Grasping) of objects, based on a decoupled arm kinematic chain and impedance control, which integrates image-based visual servoing (IBVS). The kinematic decoupling allows for fast end-effector motions and recovery, leading to robust visual servoing. The approach and pipeline generalize to any mobile platform (wheeled or tracked vehicles), but are most suitable for dynamically moving quadruped manipulators thanks to their reactivity against disturbances. The compliance of the impedance controller makes the robot safer for interactions with humans and the environment. We demonstrate the performance and robustness of the proposed approach with various experiments on our 140 kg HyQReal quadruped robot equipped with a 7-DoF manipulator arm. The experiments consider dynamic locomotion, tracking under external disturbances, and fast motions of the target object.
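
As a point of reference for the IBVS component, the classical image-based visual servoing law computes a camera twist from image-feature errors through the pseudo-inverse of the interaction matrix. The sketch below is generic textbook IBVS, not the authors' implementation; all feature values, depths and gains are invented for illustration.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point
    at depth Z, relating the feature velocity to the camera twist [v, w]."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_twist(features, targets, depths, gain=1.0):
    """Classical IBVS law: v_c = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical example: four tracked points, desired at a centered square.
s = [(0.12, 0.08), (-0.10, 0.09), (-0.11, -0.10), (0.13, -0.09)]
s_star = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
twist = ibvs_twist(s, s_star, depths=[0.5] * 4, gain=0.8)
print("camera twist [vx vy vz wx wy wz]:", twist)
```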

* Accepted as a contributed paper at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)

Reactive Landing Controller for Quadruped Robots

May 12, 2023
Francesco Roscia, Michele Focchi, Andrea Del Prete, Darwin G. Caldwell, Claudio Semini

Quadruped robots are machines intended for challenging and harsh environments. Despite progress in locomotion strategies, safely recovering from unexpected falls or planned drops is still an open problem, made even more difficult when high horizontal velocities are involved. In this work, we propose an optimization-based reactive Landing Controller that uses only proprioceptive measures for torque-controlled quadruped robots free-falling onto flat horizontal ground, knowing neither the distance to the landing surface nor the flight time. Based on an estimate of the Center of Mass horizontal velocity, the method uses the Variable Height Springy Inverted Pendulum model to continuously recompute the feet positions while the robot is falling. In this way, the quadruped is ready to attain a successful landing in all directions, even in the presence of significant horizontal velocities. The method dramatically enlarges the region of horizontal velocities that can be handled compared to a naive approach that keeps the feet still during the airborne stage. To the best of our knowledge, this is the first time that a quadruped robot successfully recovers from falls with horizontal velocities up to 3 m/s in simulation. Experiments prove that the used platform, Go1, can attain a stable standing configuration from falls with various horizontal velocities and angular perturbations.
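
The core idea of recomputing touchdown feet positions from the estimated CoM horizontal velocity can be illustrated with a capture-point-style heuristic on a linear inverted pendulum. This is only a stand-in for the paper's VHSIP-based optimization, with all numbers invented:

```python
import numpy as np

G = 9.81  # gravity [m/s^2]

def touchdown_offset(com_vel_xy, com_height):
    """Capture-point-like horizontal foot offset from below the CoM:
    place the feet where the pendulum's divergent motion would stop.
    A simplified stand-in for the paper's VHSIP-based recomputation."""
    omega = np.sqrt(G / com_height)        # LIP natural frequency
    return np.asarray(com_vel_xy) / omega  # offset along the velocity

# Hypothetical fall: CoM moving at 2 m/s forward, nominal height 0.4 m.
offset = touchdown_offset([2.0, 0.0], com_height=0.4)
nominal_feet = np.array([[ 0.3,  0.2], [ 0.3, -0.2],
                         [-0.3,  0.2], [-0.3, -0.2]])  # x, y per foot [m]
print("feet at touchdown:\n", nominal_feet + offset)
```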

* 8 pages, 5 figures, 2 tables, submitted to RA-L, accompanying video at https://youtu.be/KnmNbhkOKWI

Control of a Back-Support Exoskeleton to Assist Carrying Activities

May 11, 2023
Maria Lazzaroni, Giorgia Chini, Francesco Draicchio, Christian Di Natali, Darwin G. Caldwell, Jesús Ortiz

Back-support exoskeletons are commonly used in the workplace to reduce the risk of low back pain for workers performing demanding activities. However, for tasks other than lifting, the potential of back-support exoskeletons has not been extensively exploited. This work focuses on the use of an active back-support exoskeleton to assist carrying. Two control strategies are designed that modulate the exoskeleton torques to comply with the assistance requirements of the task. In particular, two gait phase detection frameworks are exploited to adapt the assistance according to the legs' motion. The two strategies are assessed through an experimental analysis on ten subjects. The carrying task is performed with and without exoskeleton assistance. Results demonstrate the potential of the presented controllers to assist the task without hindering gait, while improving the usability experienced by users. Moreover, the exoskeleton assistance significantly reduces the lumbar load associated with the task, demonstrating its promise for risk mitigation in the workplace.
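
A toy illustration of gait-phase-dependent assistance: infer the gait state from thigh motion and modulate the lumbar assistive torque accordingly. The abstract does not specify the two detection frameworks, so the detection rule, thresholds and torque profile below are all invented:

```python
def assistive_torque(thigh_vel_left, thigh_vel_right,
                     base_torque=5.0, gain=2.0, deadband=0.1):
    """Toy gait-phase-dependent modulation of a back-support torque [Nm]:
    scale assistance with the asymmetry of thigh angular velocities, so
    torque peaks when one leg swings and relaxes in double support.
    All constants are illustrative, not the paper's controller."""
    asymmetry = abs(thigh_vel_left - thigh_vel_right)
    if asymmetry < deadband:   # double support: near-constant support
        return base_torque
    return base_torque + gain * asymmetry

for vl, vr in [(0.0, 0.0), (1.2, -0.9), (-1.0, 1.1)]:
    print(f"thigh vel L={vl:+.1f} R={vr:+.1f} rad/s ->",
          f"{assistive_torque(vl, vr):.1f} Nm")
```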

* submitted to the 2023 IEEE International Conference on Rehabilitation Robotics (ICORR)

Learning Skills from Demonstrations: A Trend from Motion Primitives to Experience Abstraction

Oct 14, 2022
Mehrdad Tavassoli, Sunny Katyara, Maria Pozzi, Nikhil Deshpande, Darwin G. Caldwell, Domenico Prattichizzo

The uses of robots are changing from static environments in factories to novel concepts such as Human-Robot Collaboration in unstructured settings. Pre-programming all the functionalities of a robot becomes impractical; hence, robots need to learn to react to new events autonomously, just like humans. Humans, unlike machines, are naturally skilled at responding to unexpected circumstances based on either experience or observation. Embedding such anthropoid behaviours into robots therefore entails the development of neuro-cognitive models that emulate motor skills under a robot learning paradigm. Effective encoding of these skills is bound to the proper choice of tools and techniques. This paper studies different motion and behaviour learning methods ranging from Movement Primitives (MP) to Experience Abstraction (EA), applied to different robotic tasks. These methods are scrutinized and then experimentally benchmarked by reconstructing a standard pick-and-place task. Apart from providing a standard guideline for the selection of strategies and algorithms, this paper aims to draw a perspective on their possible extensions and improvements.
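
Among the movement-primitive methods this survey covers, Dynamic Movement Primitives are the canonical example. A minimal one-dimensional discrete DMP, with a hand-written forcing term standing in for learned weights and gains taken from common defaults in the DMP literature, can be sketched as:

```python
import numpy as np

def rollout_dmp(y0, goal, duration=1.0, dt=0.001,
                alpha=25.0, beta=25.0 / 4, alpha_x=3.0, forcing=None):
    """Integrate a 1-D discrete DMP:
    tau*dy = z, tau*dz = alpha*(beta*(goal - y) - z) + f(x),
    with phase x driven by the canonical system tau*dx = -alpha_x*x.
    A zero forcing term yields a pure spring-damper reaching motion."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(duration / dt)):
        f = forcing(x) if forcing else 0.0
        dz = alpha * (beta * (goal - y) - z) + f
        z += dz * dt / duration
        y += z * dt / duration
        x += -alpha_x * x * dt / duration  # phase decays from 1 toward 0
        traj.append(y)
    return np.array(traj)

# Reach from 0 to 1 with a made-up bump in the forcing term.
traj = rollout_dmp(0.0, 1.0, forcing=lambda x: 80.0 * x * (1 - x))
print(f"start={traj[0]:.3f}, end={traj[-1]:.3f}")
```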

* Under review at IEEE TCDS

A Whole-Body Controller Based on a Simplified Template for Rendering Impedances in Quadruped Manipulators

Aug 01, 2022
Mattia Risiglione, Victor Barasuol, Darwin G. Caldwell, Claudio Semini

Quadrupedal manipulators need to be compliant when dealing with external forces during autonomous manipulation, tele-operation or physical human-robot interaction. This paper presents a whole-body controller that implements Cartesian impedance control to coordinate tracking performance and desired compliance for the robot base and manipulator arm. The controller is formulated as an optimization problem using Quadratic Programming (QP) to impose a desired behavior for the system while satisfying friction cone constraints, unilateral force constraints, and joint and torque limits. The presented strategy decouples the arm and the base of the platform, enforcing the behavior of a linear double mass-spring-damper system, and allows their inertia, stiffness and damping properties to be tuned independently. The control architecture is validated through an extensive simulation study using the 90 kg HyQ robot equipped with a 7-DoF manipulator arm. Simulation results show the impedance rendering performance when external forces are applied at the arm's end-effector. The paper presents results for the full stance condition (all legs on the ground) and, for the first time, also shows how the impedance rendering is affected by the contact conditions during a dynamic gait.
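
At its core, the Cartesian impedance behavior being rendered maps an end-effector error through stiffness and damping into a wrench, and then into joint torques via the Jacobian transpose. The sketch below shows only that generic mapping; the paper's QP layer, which adds friction-cone, contact and torque-limit constraints, is omitted, and all numbers are invented:

```python
import numpy as np

def impedance_torques(J, x, x_des, xdot, xdot_des, K, D):
    """tau = J^T (K (x_des - x) + D (xdot_des - xdot)).
    J: 6xN arm Jacobian; K, D: 6x6 Cartesian stiffness and damping."""
    wrench = K @ (x_des - x) + D @ (xdot_des - xdot)
    return J.T @ wrench

# Illustrative 7-DoF arm with a random Jacobian and a 1 cm error in z.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
K = np.diag([800, 800, 800, 50, 50, 50])  # N/m and Nm/rad, illustrative
D = np.diag([60, 60, 60, 5, 5, 5])
x, x_des = np.zeros(6), np.array([0, 0, 0.01, 0, 0, 0])
tau = impedance_torques(J, x, x_des, np.zeros(6), np.zeros(6), K, D)
print("joint torques [Nm]:", np.round(tau, 2))
```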

* Accepted as a contributed paper at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)

Fusing Visuo-Tactile Perception into Kernelized Synergies for Robust Grasping and Fine Manipulation of Non-rigid Objects

Sep 15, 2021
Sunny Katyara, Nikhil Deshpande, Fanny Ficuciello, Fei Chen, Bruno Siciliano, Darwin G. Caldwell

Handling non-rigid objects with robot hands necessitates a framework that incorporates not only human-level dexterity and cognition but also multi-sensory information and system dynamics for robust and fine interactions. In this research, our previously developed kernelized synergies framework, inspired by the human behaviour of reusing the same subspace for grasping and manipulation, is augmented with visuo-tactile perception for autonomous and flexible adaptation to unknown objects. To detect objects and estimate their poses, a simplified visual pipeline using the RANSAC algorithm with Euclidean clustering and an SVM classifier is exploited. To modulate interaction efforts while grasping and manipulating non-rigid objects, tactile feedback from a T40S shokac chip sensor, providing 3D force information, is incorporated. Moreover, different kernel functions are examined within the kernelized synergies framework to evaluate its performance and potential in terms of task reproducibility, execution, generalization and synergistic re-usability. Experiments performed with a robot arm-hand system validate the capability and usability of the upgraded framework in stably grasping and dexterously manipulating non-rigid objects.
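
The perception chain described (RANSAC plane removal, Euclidean clustering, SVM classification) can be sketched with scikit-learn stand-ins. Here DBSCAN plays the role of Euclidean clustering, the table-plane removal is assumed to have happened upstream, and the point cloud, features and class labels are entirely synthetic:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Pretend RANSAC already removed the table plane, leaving two object
# clusters of 3-D points (synthetic data).
cloud = np.vstack([rng.normal([0.2, 0.0, 0.05], 0.01, (100, 3)),
                   rng.normal([0.5, 0.1, 0.08], 0.015, (100, 3))])

# Euclidean-style clustering: DBSCAN with a metric radius as a stand-in.
labels = DBSCAN(eps=0.03, min_samples=10).fit_predict(cloud)

# Toy per-cluster features: bounding-box extents, fed to a trained SVM.
def extents(pts):
    return pts.max(axis=0) - pts.min(axis=0)

X_train = rng.normal([[0.06, 0.06, 0.06], [0.10, 0.10, 0.12]], 0.01,
                     (20, 2, 3)).reshape(-1, 3)
y_train = np.tile(["ball", "cup"], 20)
clf = SVC(kernel="rbf").fit(X_train, y_train)

for k in sorted(set(labels) - {-1}):
    feat = extents(cloud[labels == k]).reshape(1, -1)
    print(f"cluster {k}: predicted class = {clf.predict(feat)[0]}")
```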

* IEEE ICRA 2022 (under review) 

Formulating Intuitive Stack-of-Tasks with Visuo-Tactile Perception for Collaborative Human-Robot Fine Manipulation

Mar 09, 2021
Sunny Katyara, Fanny Ficuciello, Tao Teng, Fei Chen, Bruno Siciliano, Darwin G. Caldwell

Enabling robots to work in close proximity with humans necessitates employing not only multi-sensory information for coordinated and autonomous interactions but also a control framework that ensures adaptive and flexible collaborative behavior. Such a control framework needs to integrate the accuracy and repeatability of robots with the cognitive ability and adaptability of humans for co-manipulation. In this regard, an intuitive stack-of-tasks (iSOT) formulation is proposed that defines the robot's actions based on human ergonomics and task progress. The framework is augmented with visuo-tactile perception for flexible interaction and autonomous adaptation. The visual information from depth cameras monitors and estimates the object pose and human arm gestures, while the tactile feedback provides exploration skills for maintaining the desired contact to avoid slippage. Experiments conducted on a robot system partnering with humans for assembly and disassembly tasks confirm the effectiveness and usability of the proposed framework.
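
A stack-of-tasks resolves prioritized objectives by projecting lower-priority task velocities into the nullspace of higher-priority ones. The two-level sketch below is the generic SOT composition, not the iSOT specifics; Jacobians and task velocities are invented:

```python
import numpy as np

def two_level_sot(J1, dx1, J2, dx2):
    """q_dot = J1+ dx1 + N1 (J2 N1)+ (dx2 - J2 J1+ dx1),
    where N1 = I - J1+ J1 is the nullspace projector of the
    higher-priority task, so task 2 cannot disturb task 1."""
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ dx1
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# Invented 7-DoF example: priority 1 keeps the end-effector still,
# priority 2 asks for an elbow-like joint motion.
rng = np.random.default_rng(2)
J1 = rng.standard_normal((6, 7))   # end-effector task
J2 = np.eye(7)[2:3]                # "move joint 2" task
dq = two_level_sot(J1, np.zeros(6), J2, np.array([0.1]))
print("residual task-1 velocity:", np.linalg.norm(J1 @ dq))  # ~0
```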

* IROS 2021

Vision Based Adaptation to Kernelized Synergies for Human Inspired Robotic Manipulation

Dec 13, 2020
Sunny Katyara, Fanny Ficuciello, Fei Chen, Bruno Siciliano, Darwin G. Caldwell

Humans, in contrast to robots, excel at fine manipulation tasks owing to their remarkable dexterity and sensorimotor organization. Enabling robots to acquire such capabilities necessitates a framework that not only replicates human behaviour but also integrates multi-sensory information for autonomous object interaction. To address such limitations, this research proposes to augment the previously developed kernelized synergies framework with visual perception to automatically adapt to unknown objects. The kernelized synergies, inspired by humans, retain the same reduced subspace for object grasping and manipulation. To detect objects in the scene, a simplified perception pipeline is used that leverages the RANSAC algorithm with Euclidean clustering and an SVM for object segmentation and recognition, respectively. Further, a comparative analysis of kernelized synergies with other state-of-the-art approaches confirms their flexibility and effectiveness on robotic manipulation tasks. The experiments conducted on the robot hand confirm the robustness of the modified kernelized synergies framework against uncertainties related to the perception of the environment.
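
The "reduced subspace" at the heart of postural synergies is, at base, a low-dimensional linear subspace of hand joint angles. A PCA sketch on synthetic grasp data shows the idea; the kernelization itself, and the 19-DoF dimension chosen here, are not taken from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic "recorded grasps": 19 hand joint angles driven by 2 latent
# coordinates plus noise, mimicking how human grasp data concentrates
# in a low-dimensional subspace.
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 19))
grasps = latent @ mixing + 0.05 * rng.standard_normal((200, 19))

pca = PCA(n_components=2).fit(grasps)
print("variance explained by 2 synergies:",
      round(pca.explained_variance_ratio_.sum(), 3))

# A new hand posture is commanded through 2 synergy values only.
posture = pca.inverse_transform([[1.0, -0.5]])
print("commanded joint angles shape:", posture.shape)
```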

Reproducible Pruning System on Dynamic Natural Plants for Field Agricultural Robots

Aug 26, 2020
Sunny Katyara, Fanny Ficuciello, Darwin G. Caldwell, Fei Chen, Bruno Siciliano

Pruning is the art of cutting unwanted and unhealthy plant branches, and is one of the most difficult tasks in field robotics. It becomes even more complex when the plant branches are moving. Moreover, the reproducibility of robot pruning skills is a further challenge, due to the heterogeneous nature of vines in the vineyard. This research proposes a multi-modal framework to deal with dynamic vines, with the aim of sim2real skill transfer. 3D models of vines are constructed in the Blender engine and rendered in a simulated environment for training the robot. A Natural Admittance Controller (NAC) is applied to deal with the dynamics of the vines; it uses force feedback and compensates for friction effects while maintaining the passivity of the system. A Faster R-CNN is used to detect the spurs on the vines, and a statistical pattern-recognition algorithm using K-means clustering is then applied to find effective pruning points. The proposed framework is tested in simulated and real environments.
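
The last step of the pipeline, grouping detected spur locations into pruning points via K-means, can be sketched with scikit-learn. The spur coordinates below are invented, and the Faster R-CNN detector is assumed to have run upstream:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Pretend Faster R-CNN returned 3-D spur positions along a cordon,
# loosely grouped around three regions of the vine (synthetic data).
spurs = np.vstack([rng.normal([0.2, 0.0, 1.0], 0.02, (8, 3)),
                   rng.normal([0.6, 0.0, 1.1], 0.02, (8, 3)),
                   rng.normal([1.0, 0.0, 1.0], 0.02, (8, 3))])

# One candidate pruning point per spur group: the cluster centroid.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(spurs)
print("candidate pruning points [m]:\n",
      np.round(kmeans.cluster_centers_, 3))
```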

* Under review at SPAR

Line Walking and Balancing for Legged Robots with Point Feet

Jul 02, 2020
Carlos Gonzalez, Victor Barasuol, Marco Frigerio, Roy Featherstone, Darwin G. Caldwell, Claudio Semini

The ability of legged systems to traverse highly-constrained environments depends by and large on the performance of their motion and balance controllers. This paper presents a controller that excels in a scenario that most state-of-the-art balance controllers have not yet addressed: line walking, or walking on nearly null support regions. Our approach uses a low-dimensional virtual model (2-DoF) to generate balancing actions through a previously derived four-term balance controller, and transforms them to the robot through a derived kinematic mapping. The capabilities of this controller are tested in simulation, where we show the 90 kg quadruped robot HyQ crossing a bridge only 6 cm wide (compared to its 4 cm diameter foot sphere) by balancing on two feet at any time while moving along a line. Additional simulations are carried out to test the performance of the controller and the effect of external disturbances. The same controller is then used on the real robot to present, for the first time, a legged robot balancing on a contact line of nearly null support area.
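
The idea of steering a low-dimensional virtual model can be illustrated with a planar inverted pendulum stabilized by state feedback about the contact line. This is a generic balance law with invented gains, not the four-term controller the paper builds on:

```python
import numpy as np

def simulate_balance(theta0, dt=0.002, steps=2000,
                     g=9.81, length=0.6, kp=120.0, kd=25.0):
    """Planar inverted pendulum leaning about the contact line.
    tau (torque per unit inertia) = -kp*theta - kd*theta_dot counters the
    destabilizing gravity term (g/l)*sin(theta). Gains are illustrative;
    the paper maps such virtual-model actions to the quadruped's joints
    through a kinematic mapping."""
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        tau = -kp * theta - kd * theta_dot
        theta_ddot = (g / length) * np.sin(theta) + tau
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
    return theta

print("lean after 4 s from 0.1 rad:", round(simulate_balance(0.1), 5))
```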
