In robotic deformable object manipulation (DOM) applications, constraints commonly arise from the environment and from task-specific requirements. Enabling DOM under constraints is therefore crucial for practical deployment. However, handling constraints is challenging due to inherent factors such as the inaccessible deformation models of deformable objects (DOs) and varying environmental setups. This article presents a systematic manipulation framework for DOM subject to constraints, built on a novel path set planning and tracking scheme. First, constrained DOM tasks are formulated in a versatile optimization formalism that enables dynamic constraint imposition. Because of the lack of a local optimization objective and the high state dimensionality, the formulated problem is not analytically solvable. To address this, planning of the path set, which collects the paths of the DO's feedback points, is proposed to provide feasible path and motion references for the DO in constrained setups. Both theoretical analyses and a computationally efficient algorithmic implementation of path set planning are discussed. Finally, a control architecture combining path set tracking and constraint handling is designed for task execution. The effectiveness of our methods is validated in a variety of DOM tasks with constrained experimental settings.
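The idea of tracking a path set under constraints can be illustrated with a toy sketch: each feedback point is driven toward its own reference path by a proportional term, and the commanded motion is then projected into a feasible region (a simple box here, standing in for environmental limits). This is not the article's algorithm; the function names, the gain, and the box bounds are all illustrative assumptions.

```python
import numpy as np

def track_path_set(points, reference_paths, step, gain=0.5, lo=-1.0, hi=1.0):
    """One tracking update for all feedback points.

    points          : (N, 2) current feedback-point positions
    reference_paths : list of (T, 2) arrays, one reference path per point
    step            : index of the current reference waypoint
    """
    # Current waypoint for each point (hold the last waypoint once reached).
    targets = np.array([path[min(step, len(path) - 1)] for path in reference_paths])
    motion = gain * (targets - points)             # proportional tracking term
    # Constraint handling: project the commanded state into the feasible box.
    return np.clip(points + motion, lo, hi)

# Three feedback points tracking three straight-line reference paths.
pts = np.zeros((3, 2))
paths = [np.linspace([0.0, 0.0], [0.8, 0.4], 10) for _ in range(3)]
for t in range(10):
    pts = track_path_set(pts, paths, t)
```

The points lag slightly behind the moving waypoints (a known property of pure proportional tracking) but stay inside the constraint box at every step.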
Robotic skill learning has been increasingly studied, but collecting demonstrations is more challenging than collecting images/videos in computer vision or text in natural language processing. This paper presents a skill learning paradigm that uses intuitive teleoperation devices to generate high-quality human demonstrations efficiently for data-driven robotic skill learning. Built on a reliable teleoperation interface, the da Vinci Research Kit (dVRK) master, a system called dVRK-Simulator-for-Demonstration (dS4D) is proposed. Various manipulation tasks show the system's effectiveness and its efficiency advantages over other interfaces. Using the collected data for policy learning has also been investigated, verifying initial feasibility. We believe the proposed paradigm can facilitate robot learning driven by high-quality demonstrations while keeping their collection efficient.
Needle picking is a challenging surgical task in robot-assisted surgery due to needles' small, slender shapes, their variations in shape and size, and the demand for millimeter-level control. Prior works, relying heavily on needle priors (e.g., geometric models), are hard to scale to unseen needle variations. In addition, visual tracking errors cannot be minimized online with their approaches. In this paper, we propose an end-to-end deep visual learning framework for needle picking in which both the visual and control components can be learned jointly online. Our framework integrates a state-of-the-art reinforcement learning algorithm, Dreamer, with behavior cloning (BC). Moreover, two novel techniques, Virtual Clutch and Dynamic Spotlight Adaptation (DSA), are introduced into our end-to-end visual controller. We conducted extensive experiments in simulation to evaluate the performance, robustness, variation adaptation, and effectiveness of the individual components of our method. Our approach, trained with 8k demonstration timesteps and 140k online policy timesteps, achieves a remarkable success rate of 80%, a new state of the art for end-to-end vision-based surgical robot learning on delicate operation tasks. Furthermore, our method generalizes to unseen dynamic scenarios with needle variations and image disturbances, highlighting its robustness and versatility. Code and videos are available at https://sites.google.com/view/dreamerbc.
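Combining a reinforcement-learning objective with behavior cloning typically means adding a supervised imitation term to the actor update. The minimal sketch below shows only that BC term, fitting a toy linear policy to demonstration actions by gradient descent on an MSE loss; in the full method this gradient would be added to the gradient of the RL actor loss. The linear policy, synthetic data, and `bc_weight` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=(64, 4))                        # demonstration observations
true_w = np.array([0.5, -0.2, 0.1, 0.3])              # toy "expert" parameters
expert_act = obs @ true_w                             # demonstration actions

w = np.zeros(4)                                       # linear policy parameters
bc_weight = 1.0                                       # weight of the BC term

for _ in range(200):
    pred = obs @ w
    # Gradient of the BC (MSE) loss: mean over the demonstration batch.
    bc_grad = obs.T @ (pred - expert_act) / len(obs)
    # In a Dreamer+BC setup, the RL actor gradient would be added here;
    # this sketch applies only the weighted BC gradient.
    w -= 0.1 * (bc_weight * bc_grad)
```

After a few hundred steps the policy parameters recover the expert's, showing how the BC term anchors the policy to the demonstrations while the RL term (omitted here) refines it online.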
Autonomous surgery has attracted increasing attention for its potential to revolutionize robotic patient care, yet it remains a distant and challenging goal. In this paper, we propose an image-based framework for high-precision autonomous suturing. We first build an algebraic geometric algorithm for accurate needle pose estimation, then design a corresponding keypoint-based calibration network for joint-offset compensation, and further plan and control the suture trajectory. Our solution ranked first among all competitors in the AccelNet Surgical Robotics Challenge. The source code is released to accelerate future autonomous surgery research.
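Surgical needles are near-circular arcs, so an algebraic least-squares circle fit to detected needle keypoints is a simple stand-in for the kind of geometric pose-estimation step such a pipeline needs. The sketch below uses the classic Kåsa fit; it is not necessarily the paper's algorithm, and all names are illustrative.

```python
import numpy as np

def fit_circle(points):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 to 2D points by linear least squares."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius

# A half-circle "needle arc" of radius 1.5 centered at (2, -1).
theta = np.linspace(0.0, np.pi, 20)
pts = np.column_stack([2 + 1.5 * np.cos(theta), -1 + 1.5 * np.sin(theta)])
center, radius = fit_circle(pts)
```

On noise-free keypoints the fit recovers the arc's center and radius exactly; with detection noise it degrades gracefully, which is why algebraic fits are a common first stage before refinement.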