This paper investigates learning effects and human operator training practices in variable autonomy robotic systems. These factors are known to affect the performance of a human-robot system, yet they are frequently overlooked. We present the results of an experiment, inspired by a search and rescue scenario, in which operators remotely controlled a mobile robot with either Human-Initiative (HI) or Mixed-Initiative (MI) control. The evidence suggests learning effects in both primary (navigation) task and secondary (distractor) task performance. Further evidence is provided that MI and HI performance in a pure navigation task is equivalent. Lastly, guidelines are proposed for experimental design and operator training practices.
Robotic cutting, or milling, plays a significant role in applications such as disassembly, decommissioning, and demolition. Planning and control of cutting in real-world, uncertain environments is a complex task, with the potential to benefit from simulated training environments. This letter focuses on sim-to-real transfer for robotic cutting policies, addressing the need for effective policy transfer from simulation to practical implementation. We extend our previous domain generalisation approach to learning cutting tasks, which is based on a mechanistic model-based simulation framework, with a hybrid method for sim-to-real transfer that combines a milling process force model with a residual Gaussian process (GP) force model learned from either single or multiple real-world cutting force examples. We demonstrate successful sim-to-real transfer of a robotic cutting policy without the need for fine-tuning on the real robot setup. The proposed approach autonomously adapts to materials with differing structural and mechanical properties. Furthermore, we demonstrate that the proposed method outperforms fine-tuning or re-training alone.
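As a rough illustration of the hybrid idea, the sketch below combines a simplified analytic milling force law with a scikit-learn GP fitted to the residuals between measured and model-predicted forces. The force law, its coefficient `k_c`, and the example data are placeholder assumptions for illustration, not the paper's actual model or measurements.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def mechanistic_force(feed_rate, depth_of_cut, k_c=800.0):
    """Hypothetical simplified milling force law: F = k_c * feed * depth."""
    return k_c * feed_rate * depth_of_cut

# Placeholder real-world examples: (feed_rate [mm/s], depth_of_cut [mm]) -> force [N].
X_real = np.array([[1.0, 0.5], [2.0, 0.5], [1.0, 1.0], [2.0, 1.0]])
F_real = np.array([420.0, 830.0, 815.0, 1650.0])

# Fit a GP to the residual between measured and model-predicted forces.
residuals = F_real - mechanistic_force(X_real[:, 0], X_real[:, 1])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_real, residuals)

def hybrid_force(x):
    """Hybrid prediction: mechanistic model plus GP residual correction."""
    corr, std = gp.predict(np.atleast_2d(x), return_std=True)
    return mechanistic_force(x[0], x[1]) + corr[0], std[0]
```

The GP only has to capture what the mechanistic model misses, which is why a single real-world cutting example can already shift the whole force surface in the right direction.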
This paper presents an assisted telemanipulation framework for reaching and grasping desired objects in clutter. Specifically, the developed system allows an operator to select an object from a cluttered heap and effortlessly grasp it, with the system assisting in selecting the best grasp and guiding the operator to reach it. To this end, we propose an object pose estimation scheme, a dynamic grasp re-ranking strategy, and a reach-to-grasp hybrid force/position trajectory guidance controller. We integrate them, along with our previous SpectGRASP grasp planner, into a classical bilateral teleoperation system that allows the operator to control the robot using a haptic device while receiving force feedback. For a user-selected object, our system first identifies the object in the heap and estimates its full six degrees of freedom (DoF) pose. Then, SpectGRASP generates a set of ordered, collision-free grasps for this object. Based on the current location of the robot gripper, the proposed grasp re-ranking strategy dynamically updates the best grasp. In assisted mode, the hybrid controller generates a zero force-torque path along the reach-to-grasp trajectory while automatically controlling the orientation of the robot. We conducted real-world experiments using a haptic device and a 7-DoF cobot with a 2-finger gripper to validate the individual components of our telemanipulation system and its overall functionality. The obtained results demonstrate the effectiveness of our system in assisting humans to clear cluttered scenes.
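A minimal sketch of the dynamic re-ranking idea follows: candidate grasps are re-sorted by a reaching cost measured from the current gripper pose. The cost terms, weights, and tuple layout are illustrative assumptions, not the system's actual ranking function.

```python
import numpy as np

def rerank_grasps(grasps, gripper_pos, gripper_quat, w_pos=1.0, w_rot=0.5):
    """Re-rank candidate grasps by reaching cost from the current gripper pose.

    grasps: list of (position (3,), unit quaternion (4,), planner_score) tuples.
    Lower cost = better; the planner score breaks near-ties. All weights here
    are illustrative placeholders.
    """
    def cost(g):
        pos, quat, score = g
        d_pos = np.linalg.norm(pos - gripper_pos)
        # Geodesic angle between unit quaternions (sign-invariant).
        d_rot = 2.0 * np.arccos(np.clip(abs(np.dot(quat, gripper_quat)), -1.0, 1.0))
        return w_pos * d_pos + w_rot * d_rot - 0.1 * score
    return sorted(grasps, key=cost)
```

Called at every control step, such a function would keep the "best" grasp consistent with wherever the operator has already moved the gripper, rather than anchoring on the initial ranking.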
Autonomous robots operating in real-world environments encounter a variety of objects that can be both rigid and articulated in nature. Having knowledge of these specific object properties not only helps in designing appropriate manipulation strategies but also aids in developing reliable tracking and pose estimation techniques for many robotic and vision applications. In this context, this paper presents a registration-based local region-to-region mapping approach to classify an object as either articulated or rigid. Using the point clouds of the intended object, the proposed method performs classification by estimating unique local transformations between point clouds over the observed sequence of movements of the object. The significant advantage of the proposed method is that it is a constraint-free approach that can classify any articulated object and is not limited to a specific type of articulation. Additionally, it is a model-free approach with no learning components, which means it can classify whether an object is articulated without requiring any object models or labelled data. We analyze the performance of the proposed method on two publicly available benchmark datasets with a combination of articulated and rigid objects. It is observed that the proposed method can classify articulated and rigid objects with good accuracy.
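To make the region-to-region intuition concrete, the sketch below estimates a rigid transform per local region with the standard Kabsch algorithm and labels the object articulated when the per-region rotations disagree. It assumes corresponding points across two frames and uses a crude axis-sorted split into regions; both are simplifying assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def classify_rigidity(src, dst, n_regions=8, angle_tol_deg=5.0):
    """Label a motion 'rigid' if per-region rotations agree, else 'articulated'.
    Assumes src/dst are corresponding points from two observed frames; only
    rotation consistency is checked in this simplified sketch."""
    regions = np.array_split(np.argsort(src[:, 0]), n_regions)  # crude x-axis split
    rotations = [kabsch(src[idx], dst[idx])[0] for idx in regions]
    R_ref = rotations[0]
    for R in rotations[1:]:
        cos_angle = (np.trace(R_ref.T @ R) - 1.0) / 2.0
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        if angle > angle_tol_deg:
            return "articulated"
    return "rigid"
```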
In applications that involve human-robot interaction (HRI), human-robot teaming (HRT), and cooperative human-machine systems, the inference of the human partner's intent is of critical importance. This paper presents a method for inferring the human operator's navigational intent, in the context of mobile robots that provide full or partial (e.g., shared control) teleoperation. We propose the Machine Learning Operator Intent Inference (MLOII) method, which (a) processes spatial data collected by the robot's sensors and (b) uses a supervised machine learning algorithm to estimate the operator's most probable navigational goal online. The proposed method's ability to reliably and efficiently infer the intent of the human operator is experimentally evaluated in realistically simulated exploration and remote inspection scenarios. The results in terms of accuracy and uncertainty indicate that the proposed method is comparable to another state-of-the-art method found in the literature.
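A minimal sketch of supervised online intent inference over a fixed set of candidate goals is given below. The feature design (per-goal distance and heading misalignment plus commanded velocity) and the random-forest classifier are illustrative assumptions, not MLOII's actual inputs or learner.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(robot_pose, cmd_vel, goals):
    """Per-goal features for a fixed, known goal set (an assumption here):
    distance to each goal and heading misalignment, plus the commanded velocity."""
    x, y, yaw = robot_pose
    feats = []
    for gx, gy in goals:
        d = np.hypot(gx - x, gy - y)
        bearing = np.arctan2(gy - y, gx - x)
        misalign = abs(np.arctan2(np.sin(bearing - yaw), np.cos(bearing - yaw)))
        feats.extend([d, misalign, cmd_vel[0], cmd_vel[1]])
    return np.array(feats)

clf = RandomForestClassifier(n_estimators=100)
# Offline: fit on logged (feature vector, eventual goal index) pairs.
#   clf.fit(X_train, y_train)
# Online, at each control step:
#   p = clf.predict_proba(features(pose, vel, goals).reshape(1, -1))
#   intent = int(np.argmax(p))   # most probable goal; p doubles as uncertainty
```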
Disassembly of electric vehicle batteries is a critical stage in the recovery, recycling, and re-use of high-value battery materials, but it is complicated by limited standardisation and design complexity, compounded by uncertainty and safety issues arising from varying end-of-life conditions. Telerobotics presents an avenue for semi-autonomous robotic disassembly that addresses these challenges. However, it is suggested that the quality and realism of the user's haptic interactions with the environment are important for precise, contact-rich, and safety-critical tasks. To investigate this proposition, we demonstrate the disassembly of a Nissan Leaf 2011 module stack as the basis for a comparative study between a traditional asymmetric haptic-'cobot' master-slave framework and a setup with identical master and slave cobots, evaluated on task completion time and success rate metrics. We demonstrate that, across a range of disassembly tasks, a time reduction of 22%-57% is achieved using identical cobots, yet this improvement arises chiefly from an expanded workspace and 1:1 positional mapping, and comes with a 10-30% reduction in first-attempt success rate. For unbolting and grasping, the realism of force feedback was comparatively less important than the directional information encoded in the interaction; however, 1:1 force mapping strengthened environmental tactile cues for vacuum pick-and-place and contact cutting tasks.
This paper addresses the problem of robotic cutting during disassembly of products for materials separation and recycling. Waste handling applications differ from milling in manufacturing processes in that they engender considerable variety and uncertainty in the parameters (e.g. hardness) of the materials the robot must cut. To address this challenge, we propose a learning-based approach incorporating elements of interaction control, in which the robot can adapt key parameters, such as feed rate, depth of cut, and mechanical compliance, during task execution. We show how a mathematical model of cutting mechanics, embedded in a simulation environment, can be used to rapidly train the system without needing large amounts of data from physical cutting trials. The simulation approach was validated on a real robot setup with four case study materials of varying structural and mechanical properties. We demonstrate that the proposed method minimises process force and path deviations to a level similar to offline optimal planning methods, while the average time to complete a cutting task is within 25% of the optimum, at the expense of a reduced volume of material removed per pass. A key advantage of our approach over similar works is that no prior knowledge about the material is required.
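To illustrate the interaction-control flavour of adapting a parameter online, the sketch below shows one step of a force-regulating feed-rate update. This fixed proportional law is a stand-in for intuition only; in the paper the adaptation policy is learned in simulation, and the gains and limits here are placeholder assumptions.

```python
def adapt_feed_rate(feed, force_meas, force_ref=40.0, gain=0.002,
                    feed_min=0.5, feed_max=5.0):
    """One step of a force-regulating feed-rate adaptation (illustrative only).
    Slows the feed when the measured cutting force exceeds the reference,
    and speeds it up when the material offers less resistance."""
    feed += gain * (force_ref - force_meas)    # proportional adaptation
    return min(max(feed, feed_min), feed_max)  # respect machine limits
```

The same pattern, with the update rule replaced by a learned policy, lets the robot react to a material it has never seen: harder material pushes the measured force up, which immediately slows the feed.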
This paper presents a direct 3D visual servo scheme for the automatic alignment of point clouds (and hence objects) using visual information in the spectral domain. Specifically, we propose an alignment method for 3D models/point clouds that estimates the global transformation between a reference point cloud and a target point cloud using harmonic domain data analysis. A 3D discrete Fourier transform (DFT) in $\mathbb{R}^3$ is used for translation estimation, and real spherical harmonics in $SO(3)$ are used for rotation estimation. This approach allows us to derive a decoupled visual servo controller with 6 degrees of freedom. We then show how this approach can be used as a controller for a robotic arm to perform a positioning task. Unlike existing 3D visual servo methods, our method works well with partial point clouds and with large transformations between the initial and desired positions. Additionally, using spectral data (instead of spatial data) for the transformation estimation makes our method robust to sensor-induced noise and partial occlusions. Our method has been successfully validated experimentally on point clouds obtained with a depth camera mounted on a robotic arm.
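A minimal sketch of Fourier-domain translation estimation is shown below, using 3D phase correlation on voxelised point clouds. The voxelisation step, grid resolution, and integer-shift output are simplifying assumptions; the paper's DFT-based estimator and its spherical-harmonic rotation stage are not reproduced here.

```python
import numpy as np

def translation_3d_phase_correlation(vol_ref, vol_tgt):
    """Estimate the integer-voxel translation between two 3D occupancy grids
    via phase correlation (point clouds assumed pre-voxelised)."""
    F_ref = np.fft.fftn(vol_ref)
    F_tgt = np.fft.fftn(vol_tgt)
    cross_power = F_ref * np.conj(F_tgt)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifftn(cross_power).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the grid to negative translations.
    return tuple(s - n if s > n // 2 else s for s, n in zip(shift, vol_ref.shape))
```

Because the estimate comes from a sharp correlation peak over the whole volume rather than point-to-point matches, it degrades gracefully under noise and partial overlap, which is the robustness property the abstract highlights.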
This paper presents a spectral domain registration-based visual servoing scheme that works on 3D point clouds. Specifically, we propose a 3D model/point cloud alignment method, which finds a global transformation between reference and target point clouds using spectral analysis. A 3D Fast Fourier Transform (FFT) in $\mathbb{R}^3$ is used for translation estimation, and real spherical harmonics in $SO(3)$ are used for rotation estimation. This approach allows us to derive a decoupled 6 degrees of freedom (DoF) controller, in which we use gradient ascent optimisation to minimise the translational and rotational costs. We then show how this methodology can be used to regulate a robot arm to perform a positioning task. In contrast to existing state-of-the-art depth-based visual servoing methods, which require either dense depth maps or dense point clouds, our method works well with partial point clouds and can effectively handle larger transformations between the reference and target positions. Furthermore, the use of spectral data (instead of spatial data) for transformation estimation makes our method robust to sensor-induced noise and partial occlusions. We validate our approach by performing experiments using point clouds acquired by a robot-mounted depth camera. The obtained results demonstrate the effectiveness of our visual servoing approach.
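As a complement to the translation example above, the sketch below shows how decoupled translation and rotation errors might drive a 6-DoF velocity controller. The simple proportional law stands in for the gradient-based optimisation the abstract describes, and the gains are placeholder assumptions.

```python
import numpy as np

def servo_step(t_err, rotvec_err, lam_t=0.5, lam_r=0.5):
    """One iteration of a decoupled 6-DoF velocity controller (sketch).
    t_err: translation error in R^3 from the spectral registration stage;
    rotvec_err: axis-angle rotation error from the spherical-harmonic stage.
    Translation and rotation are driven independently, mirroring the
    decoupling the method affords."""
    v = lam_t * np.asarray(t_err)        # linear velocity command
    w = lam_r * np.asarray(rotvec_err)   # angular velocity command
    return np.concatenate([v, w])        # 6-DoF twist sent to the arm
```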
Using different Levels of Autonomy (LoA), a human operator can vary the extent of control they have over a robot's actions. LoAs enable operators to mitigate a robot's performance degradation or limitations in its autonomous capabilities. However, LoA regulation and other tasks can often overload an operator's cognitive abilities. Inspired by video game user interfaces, we study whether adding a 'Robot Health Bar' to the robot control UI can reduce the cognitive demand and perceptual effort required for LoA regulation while promoting trust and transparency. This health bar uses the robot vitals and robot health framework to quantify and present runtime performance degradation in robots. Results from our pilot study indicate that, when using the health bar, operators used manual control more in order to minimise the risk of robot failure during periods of high performance degradation. The study also yielded insights and lessons to inform subsequent experiments on human-robot teaming.
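As a rough sketch of how such a UI element could be fed, the snippet below aggregates normalised robot "vitals" into a single health value for display. The weighting scheme and the example vital names are illustrative assumptions; the robot vitals and robot health framework defines its own quantities and aggregation.

```python
import numpy as np

def robot_health(vitals, weights=None):
    """Aggregate normalised robot vitals (each in [0, 1], 1 = nominal) into a
    single health value in [0, 1] for a UI health bar (illustrative only)."""
    v = np.asarray(list(vitals.values()), dtype=float)
    w = np.ones_like(v) if weights is None else np.asarray(weights, dtype=float)
    return float(np.clip(np.average(v, weights=w), 0.0, 1.0))

# e.g. robot_health({"localisation": 0.9, "wheel_slip": 0.6, "cpu_load": 0.8})
```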