Technical University of Munich
Abstract:Robotic assembly tasks are open challenges due to the long task horizon and complex part relations. Behavior trees (BTs) are increasingly used in robot task planning for their modularity and flexibility, but manually designing them can be effort-intensive. Large language models (LLMs) have recently been applied in robotic task planning for generating action sequences, but their ability to generate BTs has not been fully investigated. To this end, we propose LLM as BT-planner, a novel framework that leverages LLMs for BT generation in robotic assembly task planning and execution. Four in-context learning methods are introduced to utilize the natural language processing and inference capabilities of LLMs to produce task plans in BT format, reducing manual effort and ensuring robustness and comprehensibility. We also evaluate the performance of fine-tuned, smaller LLMs on the same tasks. Experiments in simulated and real-world settings show that our framework improves LLMs' success rates in BT generation through both in-context learning and supervised fine-tuning.
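As an illustration of the kind of in-context BT generation pipeline described above, here is a minimal sketch; the prompt format, XML schema, and all function names are hypothetical, not the paper's API:

    import xml.etree.ElementTree as ET

    # one illustrative in-context example (assumed format, not from the paper)
    FEW_SHOT = """Task: insert the shaft into the gear base.
    <BehaviorTree><Sequence>
      <Action name="pick" target="shaft"/>
      <Action name="insert" target="gear_base"/>
    </Sequence></BehaviorTree>"""

    def generate_bt(llm_call, instruction):
        # in-context example followed by the new task, as plain text
        prompt = FEW_SHOT + "\nTask: " + instruction + "\n"
        reply = llm_call(prompt)          # any chat-completion backend
        root = ET.fromstring(reply)       # raises on malformed XML
        if root.tag != "BehaviorTree":
            raise ValueError("model did not return a behavior tree")
        return root

Parsing the reply as XML before execution is one cheap way to obtain the robustness and comprehensibility the abstract mentions: a syntactically invalid plan is rejected instead of reaching the robot.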
Abstract:In this work, we propose an LLM-based BT generation framework that leverages the strengths of both LLMs and BTs for sequential manipulation planning. To enable human-robot collaborative task planning and make robot programming more intuitive for non-experts, the framework takes human instructions to initiate the generation of action sequences and human feedback to refine BT generation at runtime. All presented methods within the framework are tested on a real robotic assembly example, which uses a gear set model from the Siemens Robot Assembly Challenge. We use a single manipulator with a tool-changing mechanism, a common practice in flexible manufacturing, to facilitate robust grasping of a large variety of objects. Experimental results are evaluated regarding success rate, logical coherence, executability, time consumption, and token consumption. To our knowledge, this is the first human-guided LLM-based BT generation framework that unifies various plausible ways of using LLMs to fully generate BTs that are executable on a real testbed and that takes into account granular knowledge of tool use.
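The runtime human-feedback refinement described above could be organized as a simple accept-or-revise loop; the following sketch is purely illustrative (llm_call and ask_user are placeholder callables, not the paper's interfaces):

    def refine_bt(llm_call, instruction, ask_user):
        # conversation history starts from the initial human instruction
        history = ["Generate a behavior tree for: " + instruction]
        while True:
            bt_xml = llm_call("\n".join(history))
            feedback = ask_user(bt_xml)   # None means the user accepts the plan
            if feedback is None:
                return bt_xml
            # otherwise append the rejected tree and the correction, and retry
            history += [bt_xml, "Revise the tree: " + feedback]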
Abstract:Markerless motion capture devices such as the Leap Motion Controller (LMC) have been extensively used for tracking hand, wrist, and forearm positions as an alternative to Marker-based Motion Capture (MMC). However, previous studies have highlighted the subpar performance of the LMC in reliably recording hand kinematics. In this study, we employ four LMC devices and optimize their collective tracking volume to enhance the accuracy and precision of hand kinematics. Through Monte Carlo simulation, we determine an optimized layout for the four LMCs and subsequently conduct reliability and validity experiments encompassing 1560 trials across ten subjects. The combined tracking volume is validated against an MMC system, particularly for kinematic movements involving wrist, index, and thumb flexion. Using the computational resources of a single computer, the optimized configuration achieves a better visibility rate (0.05 $\pm$ 0.55) than the initial configuration (-0.07 $\pm$ 0.40). Multiple LMCs have been shown to enlarge the interaction space of the capture volume but still fail to deliver reliable measurements of dynamic movements.
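A Monte Carlo layout search of the kind mentioned above can be sketched as random sampling of sensor poses scored by a visibility metric; the workspace bounds, pose parameterization, and scoring stub below are assumptions, not the study's actual setup:

    import random

    def sample_layout(n_sensors=4, bound=0.5):
        # each sensor pose: (x, y, z, yaw), drawn uniformly inside the workspace
        return [tuple(random.uniform(-bound, bound) for _ in range(4))
                for _ in range(n_sensors)]

    def monte_carlo_layout(visibility_score, n_trials=10000):
        best_layout, best_score = None, float("-inf")
        for _ in range(n_trials):
            layout = sample_layout()
            score = visibility_score(layout)  # e.g. mean visibility of hand keypoints
            if score > best_score:
                best_layout, best_score = layout, score
        return best_layout, best_score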
Abstract:Non-prehensile object transportation offers a way to enhance robotic performance in object manipulation tasks, especially with unstable objects. Effective trajectory planning requires simultaneous consideration of robot motion constraints and object stability. Here, we introduce a physical model of object stability and propose a novel trajectory planning approach for non-prehensile transportation along arbitrary straight lines in 3D space. Validation with a 7-DoF Franka Panda robot confirms that integrating tray rotation improves transportation speed while maintaining object stability and respecting robot motion constraints.
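One standard way to encode object stability in such planning is a quasi-static Coulomb-friction check on the tray contact; the sketch below is a generic textbook condition (friction coefficient and geometry assumed, not the paper's model):

    import numpy as np

    def object_stays_put(acc_world, tray_normal, mu=0.4, g=9.81):
        # contact force per unit object mass: f = a - g (Newton's second law)
        grav = np.array([0.0, 0.0, -g])
        f = np.asarray(acc_world) - grav
        n = np.asarray(tray_normal) / np.linalg.norm(tray_normal)
        f_n = float(np.dot(f, n))                 # normal component
        f_t = float(np.linalg.norm(f - f_n * n))  # tangential component
        # contact must be maintained, and friction must cover the tangential load
        return f_n > 0.0 and f_t <= mu * f_n

Tilting the tray rotates the normal toward the commanded acceleration, which is why rotation integration permits faster transport without violating this constraint.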
Abstract:Despite recent advancements in torque-controlled tactile robots, integrating them into manufacturing settings remains challenging, particularly in complex environments. Simplifying robotic skill programming for non-experts is crucial for increasing robot deployment in manufacturing. This work proposes an innovative approach, Vision-Augmented Unified Force-Impedance Control (VA-UFIC), aimed at intuitive visuo-tactile exploration of unknown 3D curvatures. VA-UFIC stands out by seamlessly integrating vision and tactile data, enabling the exploration of diverse contact shapes in three dimensions, including point contacts, flat contacts with concave and convex curvatures, and scenarios involving contact loss. A pivotal component of our method is a robust online contact alignment monitoring system that considers tactile error, local surface curvature, and orientation, facilitating adaptive adjustments of robot stiffness and force regulation during exploration. We introduce virtual energy tanks within the control framework to ensure safety and stability, effectively addressing inherent safety concerns in visuo-tactile exploration. Evaluation using a Franka Emika research robot demonstrates the efficacy of VA-UFIC in exploring unknown 3D curvatures while adhering to arbitrarily defined force-motion policies. By tightly coupling vision and tactile sensing, VA-UFIC offers a promising avenue for intuitive exploration of complex environments, with potential applications spanning manufacturing, inspection, and beyond.
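The virtual energy tank mentioned above is a standard passivity construct: the tank is drained by the power the controller injects, and the command is cut once the budget is spent. A minimal sketch under that generic interpretation (parameters and the scalar-gain interface are illustrative, not the paper's implementation):

    class EnergyTank:
        def __init__(self, e_init=2.0, e_min=0.1):
            self.e, self.e_min = e_init, e_min
        def step(self, injected_power, dt):
            # drain the tank by the power the controller injects into the system
            self.e -= injected_power * dt
            if self.e <= self.e_min:      # budget exhausted: render the action passive
                self.e = self.e_min
                return 0.0                # scaling factor applied to the command
            return 1.0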
Abstract:Prosthetic limb abandonment remains an unsolved challenge as amputees consistently reject their devices. Current prosthetic designs often fail to balance human-like performance with acceptable device weight, highlighting the need for optimised designs tailored to modern tasks. This study aims to provide a comprehensive dataset of joint kinematics and kinetics essential for performing activities of daily living (ADL), thereby informing the design of more functional and user-friendly prosthetic devices. Functionally required ranges of motion (ROM), velocities, and torques for the glenohumeral (rotation), elbow, radioulnar, and wrist joints were computed using motion capture data from 12 subjects performing 24 ADLs. Our approach included the computation of joint torques for varying mass and inertia properties of the upper limb, while torques induced by the manipulation of experimental objects were accounted for through their interaction wrench with the subject's hand. Joint torques pertaining to individual ADLs scaled linearly with limb and object mass and mass distribution, permitting their generalisation to limb and object dynamics not explicitly simulated using linear regressors (LRM), with coefficients of determination $R = 0.99 \pm 0.01$. Exemplifying an application of data-driven prosthesis design, we optimise wrist axis orientations for two serial and two differential joint configurations. The optimised axes reduced peak power requirements by 22 to 38 percent compared to anatomical configurations by exploiting the high torque correlations between ulnar deviation and wrist flexion/extension. This study offers critical insights into the functional requirements of upper limb prostheses, providing a valuable foundation for data-driven prosthetic design that addresses key user concerns and enhances device adoption.
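The reported linear scaling means torque for unseen mass configurations can be predicted by an ordinary linear regression per ADL; a toy sketch of that idea (the feature columns and all numbers below are invented for illustration, not data from the study):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # rows: simulated conditions; columns: [limb_mass_kg, object_mass_kg, com_offset_m]
    X = np.array([[1.2, 0.0, 0.30], [1.2, 0.5, 0.30],
                  [1.5, 0.5, 0.33], [1.5, 1.0, 0.33]])
    y = np.array([0.8, 1.4, 1.7, 2.4])       # peak elbow torque [Nm], toy values
    lrm = LinearRegression().fit(X, y)
    print(lrm.predict([[1.4, 0.3, 0.32]]))   # torque for an unseen configuration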
Abstract:Robotic manipulation is essential for modernizing factories and automating industrial tasks like polishing, which require advanced tactile abilities. These robots must be easy to set up, work safely alongside humans, learn tasks autonomously, and transfer skills to similar tasks. To address these needs, we introduce the tactile-morph skill framework, which integrates unified force-impedance control with data-driven learning. Our system adjusts robot motion and force application based on estimated energy levels for the desired trajectory and force profile, ensuring safety by stopping if the energy allocated to the controller runs out. Using a Temporal Convolutional Network, we estimate the energy distribution for a given motion and force profile, enabling skill transfer across different tasks and surfaces. Our approach maintains stability and performance even on unfamiliar geometries with similar friction characteristics, demonstrating improved accuracy, zero-shot transferability, and enhanced safety in real-world scenarios. This framework promises to enhance robotic capabilities in industrial settings, making intelligent robots more accessible and valuable.
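A temporal convolutional network mapping a motion/force profile to a per-step energy estimate can be sketched as a stack of dilated 1D convolutions; the channel counts and layer sizes below are illustrative, not the paper's architecture:

    import torch
    import torch.nn as nn

    class TinyTCN(nn.Module):
        def __init__(self, in_ch=7, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_ch, hidden, kernel_size=3, padding=2, dilation=2),
                nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
                nn.ReLU(),
                nn.Conv1d(hidden, 1, kernel_size=1),   # energy estimate per time step
            )
        def forward(self, x):        # x: (batch, channels, time)
            return self.net(x)

    # usage: pose + force channels over time -> predicted energy distribution
    e = TinyTCN()(torch.randn(1, 7, 200))   # -> shape (1, 1, 200)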
Abstract:This study addresses the absence of an identification framework to quantify a comprehensive dynamic model of human and anthropomorphic tendon-driven fingers, which is necessary to investigate the physiological properties of human fingers and improve the control of robotic hands. First, a generalized dynamic model was formulated, which takes into account the inherent properties of such a mechanical system, including rigid-body dynamics, the coupling matrix, joint viscoelasticity, and tendon friction. Then, we propose a methodology comprising a series of experiments for step-wise identification and validation of this dynamic model. Moreover, an experimental setup was designed and constructed that features actuation modules and peripheral sensors to facilitate the identification process. To verify the proposed methodology, a 3D-printed robotic finger based on the index finger design of the Dexmart hand was developed, and the proposed experiments were executed to identify and validate its dynamic model. This study could be extended to the identification of cadaver hands, aiming for a consistent dataset from a single cadaver specimen to improve the development of musculoskeletal hand models.
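A generalized dynamic model of this kind typically takes a form like the following (our notation for the standard tendon-driven formulation, not necessarily the paper's exact one): $M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) + \tau_{ve}(q,\dot{q}) = R^{T}(q)\, f_{t}$, where $M$, $C$, and $g$ capture the rigid-body dynamics, $\tau_{ve}$ collects the joint visco-elastic torques, $R(q)$ is the coupling (moment-arm) matrix mapping joint to tendon space, and $f_t$ are the tendon forces after friction losses along the routing. Step-wise identification then isolates each term with dedicated experiments, e.g. static loading for elasticity and slow motions for friction.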
Abstract:This study addresses the critical need for diverse and comprehensive data focused on human arm joint torques while performing activities of daily living (ADL). Previous studies have often overlooked the influence of objects on joint torques during ADL, resulting in limited datasets for analysis. To address this gap, we propose an Object Augmentation Algorithm (OAA) capable of augmenting existing marker-based databases with virtual object motions and object-induced joint torque estimations. The OAA consists of five phases: (1) computing hand coordinate systems from optical markers, (2) characterising object movements with virtual markers, (3) calculating object motions through inverse kinematics (IK), (4) determining the wrench necessary for prescribed object motion using inverse dynamics (ID), and (5) computing joint torques resulting from object manipulation. The algorithm's accuracy is validated through trajectory tracking and torque analysis on a 7+4 degree of freedom (DoF) robotic hand-arm system, manipulating three unique objects. The results show that the OAA can accurately and precisely estimate 6 DoF object motion and object-induced joint torques. Correlations between computed and measured quantities were > 0.99 for object trajectories and > 0.93 for joint torques. The OAA was further shown to be robust to variations in the number and placement of input markers, which are expected between databases. Differences between repeated experiments were minor but significant (p < 0.05). The algorithm expands the scope of available data and facilitates more comprehensive analyses of human-object interaction dynamics.
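Phase (5) of the OAA reduces to the standard statics identity $\tau = J^{T} w$, mapping the object interaction wrench at the hand to joint torques through the manipulator Jacobian; the values below are placeholders for illustration, not the study's data:

    import numpy as np

    J = np.random.default_rng(0).normal(size=(6, 11))  # placeholder Jacobian, 7+4 DoF
    w = np.array([0.0, 0.0, -9.81 * 0.5, 0.0, 0.05, 0.0])  # wrench of a 0.5 kg object
    tau = J.T @ w    # object-induced torque at each of the 11 joints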
Abstract:Safety for physical human-robot interaction (pHRI) is a major concern for all application domains. While current standardization for industrial robot applications provides safety constraints that address the onset of pain in blunt impacts, these impact thresholds are difficult to apply to edged or pointed impactors. The most severe injuries occur in constrained contact scenarios, where crushing is possible. Nevertheless, situations potentially resulting in constrained contact only occur in certain areas of a workspace, and design or organisational approaches can be used to avoid them. What remains are risks to human physical integrity caused by unconstrained accidental contacts, which are difficult to avoid while maintaining robot motion efficiency. However, the probability and severity of injuries caused by edged or pointed impacting objects in unconstrained collisions is hardly researched. In this paper, we propose an experimental setup and procedure using two pendulums, modeling human hands and arms and robots, to understand the injury potential of unconstrained collisions of human hands with edged objects. Based on our previous studies, we use pig feet as ex vivo surrogate samples, as these closely resemble the physiological characteristics of human hands, to create an initial injury database on the severity of injuries caused by unconstrained edged or pointed impacts. The use of such experimental setups and procedures, in addition to other research on the occurrence of injuries in humans, will eventually lead to a complete understanding of the biomechanical injury potential in pHRI.