This paper develops a robotic manipulation planner for human-robot collaborative assembly. Unlike previous methods, which study independent, fully autonomous AI-equipped systems, this paper explores subtask distribution between a robot and a human and studies a human-in-the-loop robotic system for collaborative assembly. The system distributes the subtasks of an assembly between robots and humans by exploiting their respective advantages and avoiding their disadvantages: the robot works on pick-and-place tasks and provides workpieces to the human, while the human collaborator performs fine operations such as aligning, fixing, and screwing. A constraint-based incremental manipulation planning method is proposed to generate the robot motion. The performance of the proposed system is demonstrated by asking a human and a dual-arm robot to collaboratively assemble a cabinet. The results show that the proposed system and planner are effective and efficient, and can assist humans in finishing the assembly task comfortably.
In this paper, we present a planner for manipulating tethered tools using dual-arm robots. The planner generates robot motion sequences that maneuver a tool and its cable while avoiding robot-cable entanglements. First, the planner generates an Object Manipulation Motion Sequence (OMMS) to handle the tool and place it in desired poses. Second, it examines the tool movement associated with the OMMS and computes candidate positions for a cable slider that maneuvers the tool cable and avoids collisions. Finally, it determines the optimal slider positions to avoid entanglements and generates a Cable Manipulation Motion Sequence (CMMS) to place the slider in these positions. The robot executes both the OMMS and CMMS to handle the tool and its cable while avoiding entanglements and excess cable bending. Simulations and real-world experiments validate the proposed method.
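The slider-placement step above can be sketched as a simple cost minimization over collision-free candidates; the tool trajectory, candidate positions, and spherical keep-out region below are illustrative stand-ins, not the paper's actual geometry or cost function:

```python
import math

# Hypothetical inputs: a few tool poses along the OMMS, candidate slider
# positions, and a spherical keep-out region approximating an arm's workspace.
tool_path = [(0.4, -0.2, 0.3), (0.5, 0.0, 0.35), (0.4, 0.2, 0.3)]
candidates = [(0.2, 0.0, 0.6), (0.6, 0.0, 0.6), (0.45, 0.0, 0.7)]
arm_keepout = (0.55, 0.0, 0.5)   # sphere center the slider must avoid
keepout_radius = 0.15

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_slider(candidates, tool_path):
    """Return the collision-free candidate that minimizes the worst-case
    slider-to-tool distance over the trajectory (a proxy for cable slack)."""
    free = [c for c in candidates if dist(c, arm_keepout) > keepout_radius]
    return min(free, key=lambda c: max(dist(c, p) for p in tool_path))
```

A real planner would score candidates with a cable model and collision checker; this sketch only shows the select-the-minimum-cost-feasible-candidate structure of the step.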
This paper proposes a combined task and motion planner for a dual-arm robot to use a suction cup tool. The planner consists of three sub-planners: a suction pose sub-planner and two regrasp and motion sub-planners. The suction pose sub-planner finds all available poses for a suction cup tool to attach to the object, using the models of the tool and the object. Each regrasp and motion sub-planner builds a regrasp graph that represents all possible grasp sequences to reorient and move the suction cup tool from an initial pose to a goal pose; the two regrasp graphs plan for the single suction cup and for the complex of the suction cup and an object, respectively. The output of the proposed planner is a sequence of robot motions that uses a suction cup tool to manipulate objects following human instructions. The planner is examined and analyzed in both simulation and real-world executions on several real-world tasks. The results show that the planner is efficient and robust, and can generate sequential transit and transfer robot motions that finish complicated combined task and motion planning problems in a few seconds.
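The regrasp-graph search underlying this kind of planner can be sketched as a breadth-first search over (pose, grasp) states, where "transfer" edges move the object while keeping the grasp and "transit" edges change the grasp while the object rests in place. The discrete poses and grasps below are hypothetical placeholders for the sub-planners' actual output:

```python
from collections import deque

# Hypothetical discrete data: pose -> set of grasps feasible in that pose.
# In practice these come from grasp planning plus IK and collision checks.
feasible = {
    "init": {"g1", "g2"},
    "mid":  {"g2", "g3"},
    "goal": {"g3"},
}

def plan_regrasp(start_pose, goal_pose):
    """BFS over (pose, grasp) states.
    - transfer edge: keep the grasp, move the object to another pose.
    - transit edge: keep the pose, switch to another feasible grasp."""
    starts = [(start_pose, g) for g in feasible[start_pose]]
    queue = deque((s, [s]) for s in starts)
    seen = set(starts)
    while queue:
        (pose, grasp), path = queue.popleft()
        if pose == goal_pose:
            return path
        nxt = [(p, grasp) for p in feasible if grasp in feasible[p]]  # transfer
        nxt += [(pose, g) for g in feasible[pose]]                    # transit
        for state in nxt:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [state]))
    return None
```

Here `plan_regrasp("init", "goal")` finds that the tool must be regrasped at the intermediate pose, since no single grasp is feasible at both the initial and goal poses.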
This paper develops model-based grasp planning algorithms for assembly tasks. It focuses on industrial end-effectors like grippers and suction cups, and plans grasp configurations considering CAD models of target objects. The developed algorithms are able to stably plan a large number of high-quality grasps, with high precision and little dependency on the quality of CAD models. The underlying core technique is superimposed segmentation, which pre-processes a mesh model by peeling it into facets. The algorithms use the superimposed segments to locate contact points and parallel facets, and synthesize grasp poses for popular industrial end-effectors. Several tunable parameters are provided to adapt the algorithms to various requirements. The experimental section demonstrates the advantages of the algorithms by analyzing their cost and stability, the precision of the planned grasps, and the tunable parameters, with both simulations and real-world experiments. Examples of robotic assembly systems using the proposed algorithms are also presented to demonstrate their efficacy.
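For a parallel-jaw gripper, the facet-pairing step that follows segmentation can be sketched as finding near-antipodal facet pairs that fit inside the jaw stroke; the facet centroids, normals, angle tolerance, and jaw-width limit below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# Hypothetical facet data: (centroid, outward unit normal) per facet,
# standing in for the output of the mesh segmentation step.
facets = [
    (np.array([0.0, -0.02, 0.0]), np.array([0.0, -1.0, 0.0])),
    (np.array([0.0,  0.02, 0.0]), np.array([0.0,  1.0, 0.0])),
    (np.array([0.03, 0.0,  0.0]), np.array([1.0,  0.0, 0.0])),
]

def antipodal_pairs(facets, angle_tol_deg=5.0, max_width=0.08):
    """Return index pairs of facets whose normals are nearly opposite
    (antipodal within angle_tol_deg) and whose separation fits within
    the gripper's maximum jaw width (meters)."""
    cos_tol = -np.cos(np.radians(angle_tol_deg))
    pairs = []
    for i in range(len(facets)):
        for j in range(i + 1, len(facets)):
            ci, ni = facets[i]
            cj, nj = facets[j]
            if np.dot(ni, nj) <= cos_tol and np.linalg.norm(ci - cj) <= max_width:
                pairs.append((i, j))
    return pairs
```

Each surviving pair would then seed candidate grasp poses (contact points on the two facets plus an approach direction), to be filtered by collision and quality checks.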
Planning the dual-arm assembly of more than three objects is a challenging Task and Motion Planning (TAMP) problem. The assembly planner must consider not only the pose constraints of objects and robots, but also the gravitational constraints that may break the finished part. This paper proposes a planner for the dual-arm assembly of more than three objects. It automatically generates grasp configurations and assembly poses, and simultaneously searches and backtracks the grasp space and assembly space to accelerate the motion planning of the robot arms. Meanwhile, the proposed method considers gravitational constraints during robot motion planning to avoid breaking the finished part. In the experiments and analysis section, the time cost of each process and the influence of the different parameters of the proposed planner are compared and analyzed, and the optimal values are used for real-world executions of various robotic assembly tasks. The experiments show the planner to be robust and efficient.
This paper uses robots to assemble pegs into holes on surfaces with different colors and textures. It especially targets the problem of peg-in-hole assembly with initial position uncertainty. Two in-hand cameras and a force-torque sensor are used to account for the position uncertainty. A program sequence comprising learning-based visual servoing, spiral search, and impedance control is implemented to perform the peg-in-hole task with feedback from these sensors. The main contribution lies in the learning-based visual servoing stage, where a deep neural network is trained to predict where a hole is, using various sets of synthetic data generated with domain randomization. In the experiments and analysis section, the network is analyzed and compared, and a real-world robotic system that inserts pegs into holes using the proposed method is implemented. The results show that the implemented system can perform successful peg-in-hole insertions on surfaces with various colors and textures, and generally speeds up the entire peg-in-hole process.
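The spiral-search stage is commonly implemented by sliding the peg along an Archimedean spiral centered on the visually estimated hole position until the force sensor detects insertion; the waypoint generation can be sketched as below, where the pitch and step values are illustrative, not the paper's tuned parameters:

```python
import math

def spiral_waypoints(cx, cy, pitch=0.001, step=0.0005, turns=3):
    """Generate XY waypoints on an Archimedean spiral around the estimated
    hole center (cx, cy). pitch: radial growth per full turn; step:
    approximate arc length between consecutive waypoints. Units: meters."""
    pts = [(cx, cy)]
    theta = 0.0
    while theta < 2 * math.pi * turns:
        r = pitch * theta / (2 * math.pi)
        pts.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
        # advance by roughly constant arc length: d_theta ~ step / r
        theta += step / max(r, step)
    return pts
```

In a full system, the robot would visit these waypoints under impedance control and stop as soon as the vertical force drop indicates the peg has found the hole.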
This work designs a mechanical tool for robots with 2-finger parallel grippers, which extends the function of the robotic gripper without requiring tool exchangers or additional actuators. The fundamental kinematic structure of the tool is a pair of symmetric parallelograms that transmit the motion of the robotic gripper to the tool. Four torsion springs are attached to the four inner joints of the two parallelograms to open the tool as the gripper releases. The forces and transmission are analyzed in detail to ensure the tool responds well for given gripping forces and spring stiffnesses. Based on the kinematic structure, a variety of tooltips were designed so the tool can perform various tasks, and the structure can serve as a platform for various skillful gripper designs. The designed tool can be treated as a normal object and be picked up and used via automatically planned grasps: a robot may locate the tool through AR markers attached to the tool body, grasp it by selecting an automatically planned grasp, and move it from an arbitrary pose to a specific pose to grip objects. The robot may also determine the optimal grasps and usage according to the requirements of given tasks.
Robotic manipulation of tethered tools is widely seen in robotic work cells, but it may cause excess strain on a tool's cable or undesired entanglements with the robot's arms. This paper presents a manipulation planner with cable orientation constraints for tethered tools suspended by tool balancers. The planner uses orientation constraints to limit the bending of the balancer's cable while the robot manipulates the tool and places it in a desired pose. The constraints reduce entanglements and decrease the torque induced by the cable on the robot joints. Simulation and real-world experiments show that the constrained planner can successfully plan robot motions that manipulate suspended tethered tools without damaging the cable or entangling the arms, potentially avoiding accidents. The planner is expected to play a promising role in manufacturing cells.
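An orientation constraint of this kind can be sketched as a per-pose feasibility check during planning; the assumptions that the balancer suspends the cable along world +Z and that 30 degrees is the bend limit are illustrative, not values from the paper:

```python
import math

def within_cable_constraint(tool_z_axis, max_bend_deg=30.0):
    """Check a candidate tool pose against a cable orientation constraint.
    tool_z_axis: the tool's cable-exit direction in the world frame. Assuming
    the balancer suspends the cable vertically, cable bending is the angle
    between this axis and world +Z; poses exceeding the limit are rejected."""
    zx, zy, zz = tool_z_axis
    norm = math.sqrt(zx * zx + zy * zy + zz * zz)
    bend = math.degrees(math.acos(max(-1.0, min(1.0, zz / norm))))
    return bend <= max_bend_deg
```

A constrained planner would call such a check on every sampled configuration (or interpolated path point), so the returned motion keeps the cable close to vertical throughout.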
This paper presents a manipulation planning algorithm for robots to reorient objects. It automatically finds a sequence of robot motions that manipulates and prepares an object for specific tasks. Examples of such preparatory manipulation planning problems include reorienting an electric drill to cut holes, reorienting workpieces for assembly, and reorienting cargo for packing. The mechanism underlying the planner is a regrasp graph that encodes grasp configurations and object poses; the algorithm searches this graph to find a sequence of robot motions that reorients objects. The planner can plan both single-arm and dual-arm manipulation, and it automatically determines whether to use a single arm, dual arms, or their combination to finish given tasks. The planner is examined on various humanoid robots such as Nextage, HRP2Kai, and HRP5P, in both simulation and real-world experiments.
We present a bilateral teleoperation system for task learning and robot motion generation. Our system includes a bilateral teleoperation platform and deep learning software. The software records human demonstrations performed on the bilateral teleoperation platform, collecting visual images and robot encoder values, and leverages these datasets to learn the inter-modal correspondence between visual images and robot motion. In detail, it combines Deep Convolutional Auto-Encoders (DCAE) over image regions with a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor angles to learn motion taught by human teleoperation. The learned models are used to predict new motion trajectories for similar tasks. Experimental results show that our system can adaptively generate motion for similar scooping tasks. Detailed analysis is performed on the failure cases, and some insights about the capabilities and limitations of the system are summarized.