Glebys Gonzalez

Pose Imitation Constraints for Collaborative Robots

Oct 12, 2020
Glebys Gonzalez, Juan Wachs

Achieving human-like motion in robots has been a fundamental goal in many areas of robotics research. Inverse kinematic (IK) solvers have been explored as a solution to provide kinematic structures with anthropomorphic movements. In particular, numeric solvers based on geometry, such as FABRIK, have shown potential for producing human-like motion at a low computational cost. Nevertheless, these methods have shown limitations when solving for robot kinematic constraints. This work proposes a framework inspired by FABRIK for human pose imitation in real-time. The goal is to mitigate the problems of the original algorithm while retaining the resulting human-like fluidity and low cost. We first propose a human constraint model for pose imitation. Then, we present a pose imitation algorithm (PIC) and its soft version (PICs) that can successfully imitate human poses using the proposed constraint system. PIC was tested on two collaborative robots (Baxter and YuMi). Fifty human demonstrations were collected for a bi-manual assembly and an incision task. Then, two performance metrics were obtained for both robots: pose accuracy with respect to the human and the percentage of environment occlusion/obstruction. The performance of PIC and PICs was compared against the numerical solver baseline (FABRIK). The proposed algorithms achieve a higher pose accuracy than FABRIK for both tasks (25%-FABRIK, 53%-PIC, 58%-PICs). In addition, PIC and its soft version achieve a lower percentage of occlusion during incision (10%-FABRIK, 4%-PIC, 9%-PICs). These results indicate that the PIC method can reproduce human poses and achieve key desired effects of human imitation.
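For context, the geometric solver this work builds on is compact enough to sketch. Below is a minimal, unconstrained FABRIK iteration in Python with NumPy: the alternating backward/forward reaching passes only, not the paper's PIC constraint model. All names and defaults here are illustrative assumptions.

```python
import numpy as np

def fabrik(joints, target, base, tol=1e-4, max_iter=100):
    """Minimal unconstrained FABRIK pass. `joints` is an (n+1, 3) array of
    joint positions ordered from the fixed base to the end effector."""
    p = np.asarray(joints, dtype=float).copy()
    target = np.asarray(target, dtype=float)
    base = np.asarray(base, dtype=float)
    d = np.linalg.norm(np.diff(p, axis=0), axis=1)  # fixed link lengths

    # Target out of reach: stretch the chain straight toward it and stop.
    if np.linalg.norm(target - base) > d.sum():
        for i in range(len(d)):
            lam = d[i] / np.linalg.norm(target - p[i])
            p[i + 1] = (1 - lam) * p[i] + lam * target
        return p

    for _ in range(max_iter):
        if np.linalg.norm(p[-1] - target) < tol:
            break
        # Backward pass: pin the end effector to the target, move toward the base.
        p[-1] = target
        for i in range(len(d) - 1, -1, -1):
            lam = d[i] / np.linalg.norm(p[i + 1] - p[i])
            p[i] = (1 - lam) * p[i + 1] + lam * p[i]
        # Forward pass: re-pin the base, move back out to the end effector.
        p[0] = base
        for i in range(len(d)):
            lam = d[i] / np.linalg.norm(p[i + 1] - p[i])
            p[i + 1] = (1 - lam) * p[i] + lam * p[i + 1]
    return p
```

From the abstract, PIC presumably interleaves its human constraint model with passes like these (e.g., projecting each joint back into an allowed region after it is repositioned); the details are in the paper, not this sketch.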

* 9 pages, 8 figures, 3 tables 

PICs for TECH: Pose Imitation Constraints (PICs) for TEaching Collaborative Heterogeneous robots (TECH)

Sep 23, 2020
Glebys Gonzalez, Juan Wachs

Achieving human-like motion in robots has been a fundamental goal in many areas of robotics research. Inverse kinematic (IK) solvers have been explored as a solution to provide kinematic structures with anthropomorphic movements. In particular, numeric solvers based on geometry, such as FABRIK, have shown potential for producing human-like motion at a low computational cost. Nevertheless, these methods have shown limitations when solving for robot kinematic constraints. This work proposes a framework inspired by FABRIK for human pose imitation in real-time. The goal is to mitigate the problems of the original algorithm while retaining the resulting human-like fluidity and low cost. We first propose a human constraint model for pose imitation. Then, we present a pose imitation algorithm (PIC) and its soft version (PICs) that can successfully imitate human poses using the proposed constraint system. PIC was tested on two collaborative robots (Baxter and YuMi). Fifty human demonstrations were collected for a bi-manual assembly and an incision task. Then, two performance metrics were obtained for both robots: pose accuracy with respect to the human and the percentage of environment occlusion/obstruction. The performance of PIC and PICs was compared against the numerical solver baseline (FABRIK). The proposed algorithms achieve a higher pose accuracy than FABRIK for both tasks (0.25-FABRIK, 0.53-PIC, 0.58-PICs). In addition, PIC and its soft version achieve a lower percentage of occlusion during incision (0.10-FABRIK, 0.04-PIC, 0.09-PICs) and a lower percentage of obstruction during assembly (0.09-FABRIK, 0.08-PIC, 0.07-PICs). These results show that PIC can both efficiently reproduce human poses and achieve key desired effects of human imitation.
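The pose-accuracy figures above compare robot configurations against the human demonstrator link by link. The abstract does not define the metric, so the sketch below is only one plausible (assumed, not the authors') formulation: a frame counts as matched when every corresponding link direction stays within an angular threshold of the human's.

```python
import numpy as np

def link_directions(joints):
    """Unit direction vectors of each link in an (n+1, 3) joint-position chain."""
    v = np.diff(np.asarray(joints, dtype=float), axis=0)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def pose_accuracy(human_poses, robot_poses, max_angle_deg=15.0):
    """Fraction of frames whose every link deviates from the human's by less
    than max_angle_deg. Assumes both chains were retargeted to the same joint
    count; a stand-in for the paper's pose-accuracy score, not its definition."""
    matched = 0
    for h, r in zip(human_poses, robot_poses):
        cos = np.sum(link_directions(h) * link_directions(r), axis=1)
        angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        matched += bool(np.all(angles < max_angle_deg))
    return matched / len(human_poses)
```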

* 9 pages, 7 figures, 3 tables 

DESK: A Robotic Activity Dataset for Dexterous Surgical Skills Transfer to Medical Robots

Mar 03, 2019
Naveen Madapana, Md Masudur Rahman, Natalia Sanchez-Tamayo, Mythra V. Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, L. N. Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, Richard Voyles, Yexiang Xue, Juan Wachs

Datasets are an essential component for training effective machine learning models. In particular, surgical robotic datasets have been key to many advances in semi-autonomous surgeries, skill assessment, and training. Simulated surgical environments can enhance the data collection process by making it faster, simpler, and cheaper than real systems. In addition, combining data from multiple robotic domains can provide rich and diverse training data for transfer learning algorithms. In this paper, we present the DESK (Dexterous Surgical Skill) dataset. It comprises a set of surgical robotic skills collected during a surgical training task using three robotic platforms: the Taurus II robot, the simulated Taurus II robot, and the YuMi robot. This dataset was used to test the idea of transferring knowledge across different domains (e.g., from the Taurus to the YuMi robot) for a surgical gesture classification task with seven gestures. We explored three different scenarios: 1) no transfer, 2) transfer from simulated Taurus to real Taurus, and 3) transfer from simulated Taurus to the YuMi robot. We conducted extensive experiments with three supervised learning models and provided baselines in each of these scenarios. Results show that using simulation data during training enhances performance on the real robot when limited real data is available. In particular, we obtained an accuracy of 55% on the real Taurus data using a model trained only on simulator data. Furthermore, we achieved an accuracy improvement of 34% when 3% of the real data was added into the training process.
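The sim-to-real protocol described above reduces to a simple recipe: train on simulator features, then mix a small slice of real data into the training set. Below is a minimal scikit-learn sketch of that recipe under stated assumptions; the random stand-in arrays, feature width, and SVM choice are illustrative, not the DESK baselines themselves.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in arrays: rows are per-frame kinematic features, labels are the
# seven surgical gestures (0..6). Replace with features extracted from DESK.
rng = np.random.default_rng(0)
X_sim, y_sim = rng.normal(size=(2000, 38)), rng.integers(0, 7, 2000)
X_real, y_real = rng.normal(size=(700, 38)), rng.integers(0, 7, 700)

# Scenario: train only on simulated data, evaluate on the real robot.
clf = SVC(kernel="rbf").fit(X_sim, y_sim)
print("sim-only:", accuracy_score(y_real, clf.predict(X_real)))

# Scenario: add a small fraction (~3%) of real data into training,
# then evaluate only on the held-out real frames to avoid leakage.
k = max(1, int(0.03 * len(X_real)))
idx = rng.choice(len(X_real), size=k, replace=False)
held_out = np.ones(len(X_real), dtype=bool)
held_out[idx] = False

X_mix = np.vstack([X_sim, X_real[idx]])
y_mix = np.concatenate([y_sim, y_real[idx]])
clf = SVC(kernel="rbf").fit(X_mix, y_mix)
print("sim + 3% real:", accuracy_score(y_real[held_out], clf.predict(X_real[held_out])))
```

On random stand-in data both scores hover near chance (1/7); the abstract's 55% and +34% figures come from real DESK features, which this sketch does not reproduce.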

* 8 pages, 5 figures, 4 tables, submitted to IROS 2019 conference 