This paper presents a Center of Mass (CoM) based manipulation and regrasp planner that implements stability constraints to preserve the robot's balance. The planner provides a graph of IK-feasible, collision-free, and stable motion sequences, constructed using an energy-based motion planning algorithm. It ensures that the assembly motions are stable and prevents the robot from falling while performing dexterous tasks in different situations. Furthermore, the constraints are also used to perform an RRT-inspired task-related stability estimation in several simulations. The estimation can be used to select between single-arm and dual-arm regrasping configurations to achieve greater stability and robustness for a given manipulation task. To validate the planner and the task-related stability estimation, several tests are performed in simulations and real-world experiments involving the HRP5P humanoid robot, the 5th generation of the HRP robot family. The experimental results suggest that the planner and the task-related stability estimation provide robust behavior for the humanoid robot while performing regrasp tasks.
This paper proposes a novel robotic hand design for assembly tasks. The idea is to combine two simple grippers -- an inner gripper used for precise alignment, and an outer gripper used for stable holding. Conventional robotic hands require complicated compliant mechanisms or complicated control strategies and force sensing to conduct assembly tasks, which makes them costly and ill-suited to picking and arranging small objects like screws or washers. Compared to conventional hands, the proposed design provides a low-cost solution for aligning, picking up, and arranging various objects by taking advantage of the geometric constraints of the positioning fingers and gravity. It is able to deal with small screws and washers, and to eliminate the position errors of cylindrical objects or objects with cylindrical holes. In the experiments, both real-world tasks and quantitative analysis are performed to validate the aligning, picking, and arranging abilities of the design.
This paper presents a double jaw hand for industrial assembly. The hand comprises two orthogonal parallel grippers with different mechanisms. The inner gripper is made of a crank-slider mechanism, which is compact and able to firmly hold objects like shafts. The outer gripper is made of a parallelogram mechanism that has a large stroke to hold big objects like pulleys. The two grippers are connected by a prismatic joint along the hand's approaching vector. The hand is able to hold two objects and perform in-hand manipulation like pull-in (insertion) and push-out (ejection). This paper presents the detailed design and implementation of the hand, and demonstrates its advantages by performing experiments on two sets of peg-in-multi-hole assembly tasks as part of the World Robot Challenge (WRC) 2018 using a bimanual robot.
This paper proposes a novel assembly planner for a manipulator which can simultaneously plan the assembly sequence, robot motion, grasping configuration, and exchange of grippers. Our assembly planner assumes multiple grippers and can automatically select a feasible one to assemble a part. For a given AND/OR graph of an assembly task, we consider generating the assembly graph from which the assembly motion of a robot can be planned. The edges of the assembly graph are composed of three kinds of paths, i.e., transfer/assembly paths, transit paths, and tool-exchange paths. In this paper, we first explain the proposed method for planning an assembly motion sequence including the function of gripper exchange. Finally, the effectiveness of the proposed method is confirmed through some numerical examples and a physical experiment.
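The idea of searching a graph whose edges carry the three path types can be sketched as follows. This is a minimal illustration, not the paper's planner; the node names, the toy graph, and the `plan_sequence` helper are all invented for the example.

```python
from collections import deque

# Hypothetical assembly graph: each edge is labeled with one of the three
# path types named in the abstract. Nodes and connections are invented.
graph = {
    "start":     [("grasp_A", "transit")],
    "grasp_A":   [("place_A", "transfer/assembly")],
    "place_A":   [("swap_tool", "tool_exchange"), ("grasp_B", "transit")],
    "swap_tool": [("grasp_B", "transit")],
    "grasp_B":   [("goal", "transfer/assembly")],
}

def plan_sequence(graph, start, goal):
    """Breadth-first search returning one feasible sequence of typed edges."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt, edge_type in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, nxt, edge_type)]))
    return None  # no feasible sequence

plan = plan_sequence(graph, "start", "goal")
```

A real planner would additionally check IK feasibility and collisions on every edge before admitting it to the graph.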
In this research, we tackle the problem of picking an object from a randomly stacked pile. Since the complex physical phenomena of contact among objects and fingers make it difficult to perform bin-picking with a high success rate, we consider introducing a learning-based approach. For the purpose of collecting a sufficient number of training samples within a reasonable period of time, we introduce a physics simulator where approximation is used for collision checking. In this paper, we first formulate learning-based robotic bin-picking by using a convolutional neural network (CNN). We also obtain the optimum grasping posture of a parallel jaw gripper by using the CNN. Finally, we show that the effect of the approximation introduced in collision checking is relaxed if we use an exact 3D model to generate the depth image of the pile as an input to the CNN.
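The core prediction step — mapping a depth image of the pile to a grasp success probability — can be sketched with a toy CNN forward pass. This is not the paper's network; the layer sizes, the synthetic depth image, and all weights are arbitrary placeholders, written in plain NumPy to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_success(depth_image, kernel, w_out, b_out):
    """Tiny CNN forward pass: one conv layer, ReLU, global average
    pooling, then a logistic output giving P(grasp succeeds)."""
    feat = np.maximum(conv2d(depth_image, kernel), 0.0)  # conv + ReLU
    pooled = feat.mean()                                 # global average pool
    return 1.0 / (1.0 + np.exp(-(w_out * pooled + b_out)))

depth = rng.uniform(0.3, 0.6, size=(16, 16))  # synthetic depth image of a pile
kernel = rng.standard_normal((3, 3)) * 0.1
p = predict_success(depth, kernel, w_out=2.0, b_out=-0.5)
```

In practice one would score many candidate grasp postures this way and execute the one with the highest predicted probability.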
This paper shows experimental results on learning-based randomized bin-picking combined with iterative visual recognition. We use a random forest to predict whether or not a robot will successfully pick an object for given depth images of the pile, taking the collision between a finger and a neighboring object into account. For the discriminator to be accurate, we consider estimating objects' poses by merging multiple depth images of the pile captured from different points of view by using a depth sensor attached to the wrist. We show that, even if a robot is predicted to fail in picking an object with a single depth image due to its large occluded area, it is finally predicted as a success after merging multiple depth images. In addition, we show that the random forest can be trained with a small number of training samples.
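The merging step — combining depth images from several viewpoints so that regions occluded in one view are filled in from another — can be sketched as a per-pixel reduction over registered views. This is a simplified stand-in, assuming the views are already registered into a common image frame; the paper's actual merging procedure may differ.

```python
import numpy as np

def merge_depth_images(views):
    """Merge registered depth images by taking, per pixel, the closest
    valid measurement; NaN marks pixels occluded in a given view."""
    return np.nanmin(np.stack(views), axis=0)

# Two synthetic 4x4 views: each has a region (NaN) that only the other
# view observes, mimicking occlusion from different sensor poses.
v1 = np.array([[0.5, 0.5, np.nan, np.nan],
               [0.5, 0.5, np.nan, np.nan],
               [0.4, 0.4, 0.4,    0.4],
               [0.4, 0.4, 0.4,    0.4]])
v2 = np.array([[0.5, 0.5, 0.45, 0.45],
               [0.5, 0.5, 0.45, 0.45],
               [np.nan, np.nan, np.nan, np.nan],
               [np.nan, np.nan, np.nan, np.nan]])

merged = merge_depth_images([v1, v2])
occluded_after = int(np.isnan(merged).sum())
```

After merging, the pile's surface is fully observed, which is what lets a grasp predicted to fail on a single occluded view be re-predicted as a success.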
Robot introspection, as opposed to the anomaly detection typical in process monitoring, helps a robot understand what it is doing at all times. A robot should be able to identify its actions not only when failure or novelty occurs, but also as it executes any number of sub-tasks. As robots continue their quest of functioning in unstructured environments, it is imperative that they understand what it is they are actually doing to render them more robust. This work investigates the modeling ability of Bayesian nonparametric techniques on Markov switching processes to learn the complex dynamics typical of robot contact tasks. We study whether the Markov switching process, together with Bayesian priors, can outperform the modeling ability of its counterparts: an HMM with and without Bayesian priors. The work was tested in a snap assembly task characterized by high elastic forces. The task consists of an insertion sub-task with very complex dynamics. Our approach showed a stronger ability to generalize and was able to better model the sub-task with complex dynamics in a computationally efficient way. The modeling technique is also used to learn a growing library of robot skills, one that, when integrated with low-level control, allows for robot online decision making.
Robotic failure is all too common in unstructured robot tasks. Despite well-designed controllers, robots often fail due to unexpected events. How do robots measure unexpected events? Many do not. Most robots are driven by the sense-plan-act paradigm; more recently, however, robots are undergoing a sense-plan-act-verify paradigm. In this work, we present a principled methodology to bootstrap online robot introspection for contact tasks. In effect, we are trying to enable the robot to answer the questions: what did I do? Is my behavior as expected or not? To this end, we analyze noisy wrench data and postulate that it inherently contains patterns that can be effectively represented by a vocabulary. The vocabulary is generated by segmenting and encoding the data. When the wrench information represents a sequence of sub-tasks, we can think of the vocabulary forming a sentence (a set of words with grammar rules) for a given sub-task, allowing each sub-task to be uniquely represented. The grammar, which can also include unexpected events, was classified in offline and online scenarios as well as for simulated and real robot experiments. Multiclass Support Vector Machines (SVMs) were used offline, while online probabilistic SVMs were used to give temporal confidence to the introspection result. The contribution of our work is the presentation of a generalizable online semantic scheme that enables a robot to understand its high-level state, whether nominal or abnormal. It is shown to work in offline and online scenarios for a particularly challenging contact task: snap assemblies. We perform the snap assembly in simulated and real one-arm experiments and a simulated two-arm experiment. This verification mechanism can be used by high-level planners or reasoning systems to enable intelligent failure recovery or to determine the next optimal manipulation skill to be used.
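The segment-and-encode step that turns noisy wrench data into a "word" can be sketched as follows. This is a deliberately simplified stand-in for the paper's vocabulary generation: the fixed-window segmentation, the three-letter alphabet, and the slope thresholds are all invented for illustration.

```python
import numpy as np

def encode_wrench(signal, window=10, thresh=0.05):
    """Segment a 1-D wrench signal into fixed windows and encode each
    window's least-squares slope as a letter: 'i' increasing,
    'd' decreasing, 's' steady. The concatenation is the 'word'
    representing the sub-task."""
    letters = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        slope = np.polyfit(np.arange(window), seg, 1)[0]
        if slope > thresh:
            letters.append("i")
        elif slope < -thresh:
            letters.append("d")
        else:
            letters.append("s")
    return "".join(letters)

# Synthetic force trace: free motion, contact ramp-up, steady hold.
trace = np.concatenate([np.zeros(10), 0.5 * np.arange(10), np.full(10, 4.5)])
word = encode_wrench(trace)
```

Words produced this way become feature inputs to a classifier (multiclass SVMs in this work), which maps each word to a nominal or abnormal sub-task label.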
This paper develops intelligent algorithms for robots to reorient objects. Given the initial and goal poses of an object, the proposed algorithms plan a sequence of robot poses and grasp configurations that reorient the object from its initial pose to the goal. While the topic has been studied extensively in previous work, this paper makes important improvements in grasp planning by using over-segmented meshes, in data storage by using a relational database, and in regrasp planning by mixing real-world roadmaps. The improvements enable robots to perform robust regrasp planning using tens of thousands of grasps and their relationships in interactive time. The proposed algorithms are validated using various objects and robots.
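The relational-database idea — storing grasps keyed by object pose so that regrasp queries become joins — can be sketched with SQLite. The schema, the pose labels, and the quality scores below are invented for illustration; the paper's actual database layout is not specified here.

```python
import sqlite3

# Hypothetical schema: one row per (grasp, object pose) pair at which the
# grasp is IK-feasible and collision-free, with a quality score.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE grasps (grasp_id INTEGER, pose_id TEXT, quality REAL)")
rows = [(1, "init", 0.9), (2, "init", 0.7), (2, "goal", 0.8), (3, "goal", 0.6)]
con.executemany("INSERT INTO grasps VALUES (?, ?, ?)", rows)

# A grasp feasible at both the initial and goal poses allows a direct
# pick-and-place without regrasping; a self-join finds such connecting
# grasps, best (worst-case) quality first.
shared = con.execute("""
    SELECT a.grasp_id, MIN(a.quality, b.quality) AS q
    FROM grasps a JOIN grasps b ON a.grasp_id = b.grasp_id
    WHERE a.pose_id = 'init' AND b.pose_id = 'goal'
    ORDER BY q DESC
""").fetchall()
```

When no single grasp connects the two poses, the same table supports searching for intermediate placements, i.e., a regrasp sequence, with indexed queries rather than in-memory scans over tens of thousands of grasps.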
Robotic failure is all too common in unstructured robot tasks. Despite well-designed controllers, robots often fail due to unexpected events. How do robots measure unexpected events? Many do not. Most robots are driven by the sense-plan-act paradigm; more recently, however, robots are working with a sense-plan-act-verify paradigm. In this work we present a principled methodology to bootstrap robot introspection for contact tasks. In effect, we are trying to answer the question: what did the robot do? To this end, we hypothesize that all noisy wrench data inherently contains patterns that can be effectively represented by a vocabulary. The vocabulary is generated by meaningfully segmenting the data and then encoding it. When the wrench information represents a sequence of sub-tasks, we can think of the vocabulary forming sets of words or sentences, such that each sub-task is uniquely represented by a word set. Such sets can be classified using statistical or machine learning techniques. We use SVMs and Mondrian Forests to classify contact tasks both in simulation and on real robots for one- and dual-arm scenarios, showing the general robustness of the approach. The contribution of our work is the presentation of a simple but generalizable semantic scheme that enables a robot to understand its high-level state. This verification mechanism can provide feedback for high-level planners or reasoning systems that use semantic descriptors as well. The code, data, and other supporting documentation can be found at: http://www.juanrojas.net/2017icra_wrench_introspection.