Constrained motion planning is a challenging field of research that seeks computationally efficient methods for finding a collision-free path connecting a given start and goal by traversing zero-volume constraint manifolds for a given planning problem. Such planning problems arise surprisingly often, for instance in robot manipulation for daily-life assistive tasks. However, few solutions to constrained motion planning exist, and those that do suffer from high computational time complexity in finding a path solution on the manifolds. To address this challenge, we present Constrained Motion Planning Networks X (CoMPNetX), a neural planning approach comprising a conditional deep neural generator and discriminator with neural-gradient-based fast projections onto the constraint manifolds. We also introduce neural task and scene representations, conditioned on which CoMPNetX generates implicit manifold configurations to accelerate any underlying classical planner, such as sampling-based motion planning methods, for quickly solving complex constrained planning tasks. We show that our method, equipped with any constraint-adherence technique, finds path solutions with high success rates and lower computation times than state-of-the-art traditional path-finding tools on a variety of challenging scenarios.
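The projection step mentioned above can be illustrated with a classical stand-in: whereas CoMPNetX uses learned neural gradients, a generic approach iterates Jacobian pseudo-inverse (Newton) steps to pull a configuration onto the constraint manifold f(q) = 0. The constraint function `circle` and all numerical settings below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_to_manifold(q, f, tol=1e-6, max_iters=50, eps=1e-6):
    """Project configuration q onto the manifold f(q) = 0 by
    iterating Jacobian pseudo-inverse (Newton) steps."""
    q = np.asarray(q, dtype=float)
    for _ in range(max_iters):
        err = np.atleast_1d(f(q))
        if np.linalg.norm(err) < tol:
            return q
        # Finite-difference Jacobian of the constraint function.
        J = np.zeros((err.size, q.size))
        for i in range(q.size):
            dq = np.zeros_like(q)
            dq[i] = eps
            J[:, i] = (np.atleast_1d(f(q + dq)) - err) / eps
        # Minimum-norm Newton step back toward the manifold.
        q = q - np.linalg.pinv(J) @ err
    return q

# Toy example: project a point onto the unit circle x^2 + y^2 = 1.
circle = lambda q: q[0]**2 + q[1]**2 - 1.0
q_proj = project_to_manifold([2.0, 0.5], circle)
```

For a unit-circle constraint the pseudo-inverse step moves the point radially, so the projected configuration keeps the original direction while satisfying the constraint to the requested tolerance.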
Autonomous robotic surgery has progressed significantly over the last decade, with the aims of reducing surgeon fatigue, improving procedural consistency, and perhaps one day taking over surgery itself. However, automation has not been applied to the critical surgical task of controlling tissue and blood vessel bleeding, known as hemostasis. The task of hemostasis covers a spectrum of bleeding sources and a range of blood velocities, trajectories, and volumes; in an extreme case, an uncontrolled blood vessel fills the surgical field with flowing blood. In this work, we present the first automated solution for hemostasis through the development of a novel probabilistic blood flow detection algorithm and a trajectory generation technique that guides autonomous suction tools toward pooling blood. The blood flow detection algorithm is tested both in simulated scenes and in a real-life trauma scenario involving a hemorrhage that occurred during a thyroidectomy. The complete solution is tested in a physical lab setting with the da Vinci Research Kit (dVRK) and a simulated surgical cavity for blood to flow through. The results show that our automated solution detects accurately, reacts quickly, and effectively removes the flowing blood. The proposed methods are therefore powerful tools for clearing the surgical field, after which either a surgeon or a future autonomous system can close the vessel rupture.
Reliable real-time planning for robots is essential in today's rapidly expanding automated ecosystem. In such environments, traditional methods that plan by relaxing constraints become unreliable or slow for kinematically constrained robots. This paper describes Dynamic Motion Planning Networks (Dynamic MPNet), an extension of Motion Planning Networks to non-holonomic robots that addresses the challenge of real-time motion planning with a neural planning approach. We propose modifications to the training and planning networks that make real-time planning possible while improving the data efficiency of training and the generalizability of the trained models. We evaluate our model in simulation on planning tasks for a non-holonomic robot, and we also demonstrate experimental results for an indoor navigation task using a Dubins car.
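The Dubins car mentioned above is the standard minimal non-holonomic model: constant forward speed with a bounded turn rate, so the vehicle cannot move sideways or turn in place. A minimal kinematic rollout, with illustrative speed and time-step values, might look like:

```python
import math

def dubins_step(state, turn_rate, v=1.0, dt=0.05):
    """One Euler-integration step of the Dubins car model:
    constant forward speed v, commanded turn rate (non-holonomic)."""
    x, y, theta = state
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += turn_rate * dt
    return (x, y, theta)

def rollout(state, controls, **kw):
    """Integrate a sequence of turn-rate commands from a start state."""
    path = [state]
    for u in controls:
        state = dubins_step(state, u, **kw)
        path.append(state)
    return path

# Drive straight for 1 s, then arc left at a turn rate of 1 rad/s.
path = rollout((0.0, 0.0, 0.0), [0.0] * 20 + [1.0] * 20)
```

A planner for such a robot must produce paths realizable by exactly these kinematics, which is why planning through constraint relaxation becomes unreliable here.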
The presence of task constraints imposes a significant challenge for motion planning. Despite recent advancements, existing algorithms remain computationally expensive for most planning problems. In this paper, we present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints. Our approach comprises the following components: i) constraint and environment perception encoders; ii) a neural robot configuration generator that outputs configurations on or near the constraint manifold(s); and iii) a bidirectional planning algorithm that takes the generated configurations and creates a feasible robot motion trajectory. We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems. Furthermore, it generalizes with high success rates to new object locations not seen during training in the given environments. Compared to state-of-the-art constrained motion planning algorithms, CoMPNet is an order of magnitude faster in computational speed, with significantly lower variance.
Evaluating the distance to collision for robot manipulators is useful for assessing the feasibility of a robot configuration and for defining safe robot motion in unpredictable environments. However, distance estimation is a time-consuming operation, and the sensors involved in measuring the distance are noisy. A challenge thus exists in evaluating the expected distance to collision for safer robot control and planning. In this work, we propose using Gaussian process (GP) regression with the forward kinematics (FK) kernel (a similarity function for robot manipulators) to efficiently and accurately estimate the distance to collision. We show that the GP model with the FK kernel achieves distance evaluations 70 times faster than a standard geometric technique and up to 13 times more accurate than other regression models, even when the GP is trained on noisy distance measurements. We employ this technique in trajectory optimization tasks and observe 9 times faster optimization than with the noise-free geometric approach, while obtaining similar optimized motion plans. We also propose a confidence-based hybrid model that uses model-based predictions in regions of high confidence and switches to a more expensive sensor-based approach elsewhere, and we demonstrate the usefulness of this hybrid model in an application involving reaching into a narrow passage.
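As a rough sketch of the idea (not the paper's implementation), one can combine an FK-style kernel, which compares configurations through workspace control points rather than raw joint angles, with standard GP posterior-mean regression. The toy planar 2-link arm, kernel bandwidth `gamma`, and noise level below are all assumed for illustration.

```python
import numpy as np

def fk_points(q, lengths=(1.0, 1.0)):
    """Forward kinematics of a toy planar 2-link arm: returns the
    elbow and end-effector positions for joint angles q = (q1, q2)."""
    x1 = lengths[0] * np.cos(q[0])
    y1 = lengths[0] * np.sin(q[0])
    x2 = x1 + lengths[1] * np.cos(q[0] + q[1])
    y2 = y1 + lengths[1] * np.sin(q[0] + q[1])
    return np.array([x1, y1, x2, y2])

def fk_kernel(Qa, Qb, gamma=1.0):
    """FK-style kernel: an RBF over workspace control points
    rather than over raw joint angles."""
    Fa = np.array([fk_points(q) for q in Qa])
    Fb = np.array([fk_points(q) for q in Qb])
    d2 = ((Fa[:, None, :] - Fb[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_predict(Q_train, y_train, Q_test, noise=1e-2, gamma=1.0):
    """Posterior mean of GP regression with the kernel above."""
    K = fk_kernel(Q_train, Q_train, gamma) + noise * np.eye(len(Q_train))
    k_star = fk_kernel(Q_test, Q_train, gamma)
    return k_star @ np.linalg.solve(K, np.asarray(y_train))
```

Because the distance to collision varies smoothly with the arm's workspace geometry, a kernel defined on FK points gives the GP a much better-matched notion of similarity than one defined on joint angles alone.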
Robotic automation in surgery requires precise tracking of surgical tools and mapping of deformable tissue. Previous surgical perception frameworks demand significant effort in developing features for surgical tool and tissue tracking. In this work, we overcome this challenge by exploiting deep learning methods for surgical perception. We integrate deep neural networks, capable of efficient feature extraction, into the tissue reconstruction and instrument pose estimation processes. By leveraging transfer learning, the deep-learning-based approach requires minimal training data and reduced feature engineering effort to fully perceive a surgical scene. The framework was tested on three publicly available datasets, which use the da Vinci Surgical System, for comprehensive analysis. Experimental results show that our framework achieves state-of-the-art tracking performance in a surgical environment by utilizing deep learning for feature extraction.
This paper presents the design and performance of a screw-propelled redundant serpentine robot. The robot comprises serially linked, identical modules, each incorporating an Archimedes' screw for propulsion and a universal joint (U-joint) for orientation control. When serially chained, these modules form a versatile snake robot platform that can reshape its body configuration for varying environments and gait patterns typical of snake movement. Furthermore, the Archimedes' screws allow for novel omni-wheel-drive-like motions by speed-controlling their screw threads. This paper considers the mechanical and electrical design, as well as the software architecture, for realizing a fully integrated system. The system includes 3$N$ actuators for $N$ segments, each controlled using a BeagleBone Black with a customized power-electronics cape, a 9 Degrees of Freedom (DoF) Inertial Measurement Unit (IMU), and a scalable communication channel over ROS. The intended application for this robot is as an instrumentation mobility platform on terrestrial planets where the terrain may involve vents, caves, ice, and rocky surfaces. Additional experiments are shown on our website.
Kernel functions may be used in robotics for comparing different poses of a robot, such as in collision checking, inverse kinematics, and motion planning. These comparisons provide distance metrics often based on joint space measurements and are performed hundreds or thousands of times a second, continuously for changing environments. Few examples exist of creating new kernels, despite their significant effect on computational performance and robustness in robot control and planning. We introduce a new kernel function based on forward kinematics (FK) for comparing robot manipulator configurations. We integrate the new FK kernel into our proxy collision checker, Fastron, which previously showed significant speed improvements in collision checking and motion planning. With the new FK kernel, we realize a two-fold speedup in proxy collision check speed, an 8-fold reduction in memory, and a boost in classification accuracy from 75% to over 95% for a 7 degree-of-freedom robot arm, compared to the previously used radial basis function kernel. Compared to state-of-the-art geometric collision checkers, collision checks with the FK kernel are now 9 times faster. To show the breadth of the approach, we apply Fastron FK in OMPL across a wide variety of motion planners, showing consistently faster robot planning.
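To make the proxy-collision-checking idea concrete, the hedged sketch below trains a plain kernel perceptron on binary collision labels over configurations; Fastron's actual update rule, active-learning strategy, and the FK kernel itself differ, and every parameter here (the RBF bandwidth, the circular obstacle, the grid of configurations) is an illustrative assumption. One could substitute a forward-kinematics-based similarity function for the `rbf` shown here.

```python
import numpy as np

def rbf(a, b, gamma=50.0):
    """Gaussian kernel between two configurations."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def train_kernel_perceptron(X, y, kernel=rbf, epochs=50):
    """Learn collision labels y in {-1, +1} over configurations X
    with a plain kernel perceptron: on each misclassified sample,
    add that sample's label to its weight."""
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        mistakes = 0
        for i in range(len(X)):
            score = sum(alpha[j] * kernel(X[j], X[i])
                        for j in range(len(X)) if alpha[j] != 0.0)
            if y[i] * score <= 0:
                alpha[i] += y[i]
                mistakes += 1
        if mistakes == 0:
            break  # the training set is perfectly classified
    return alpha

def predict(X_train, alpha, x, kernel=rbf):
    """Proxy collision query: +1 predicts collision, -1 predicts free."""
    score = sum(alpha[j] * kernel(X_train[j], x)
                for j in range(len(X_train)) if alpha[j] != 0.0)
    return 1 if score > 0 else -1
```

Once trained, a query costs only a weighted kernel sum over the support points, which is the source of the speedup over geometric collision checking.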
Interventional Radiology (IR) enables earlier diagnosis and less invasive treatment of numerous ailments. Here we present our ongoing development of CRANE: CT Robotic Arm and Needle Emplacer, a robotic needle-positioning system for CT-guided procedures. The robot has 8 active Degrees of Freedom (DoF) and a novel infinite-travel needle insertion mechanism. The control system is distributed using the Robot Operating System (ROS) across a low-latency network that interconnects a real-time, low-jitter controller with a desktop computer hosting the User Interface (UI) and high-level control. This platform can serve to evaluate limitations in current procedures and to prototype potential solutions to these challenges in situ.