This paper presents a theory of optimization fabrics: second-order differential equations that encode nominal behaviors on a space and can be used to define the behavior of a smooth optimizer. Optimization fabrics can encode commonalities among optimization problems that reflect the structure of the space itself, enabling smooth optimization processes to intelligently navigate each problem even when optimizing simple, naive potential functions. Importantly, optimization over a fabric is inherently asymptotically stable. The majority of this paper is dedicated to developing a tool set for the design and use of a broad class of fabrics called geometric fabrics. Geometric fabrics encode behavior as general nonlinear geometries, which are covariant second-order differential equations with a special homogeneity property ensuring that their behavior is independent of the system's speed through the medium. A class of Finsler Lagrangian energies can be used to define both how these nonlinear geometries combine with one another and how they react when potential functions force them from their nominal paths. Furthermore, geometric fabrics are closed under the standard operations of pullback and combination on a transform tree. For behavior representation, this class of geometric fabrics constitutes a broad class of spectral semi-sprays (specs), also known as Riemannian Motion Policies (RMPs) in the context of robotic motion generation, that captures the intuitive separation between acceleration policy and priority metric critical for modular design while remaining inherently stable. Geometric fabrics are therefore safe and easier for less experienced behavioral designers to use. Applications of this theory to policy representation and generalization in learning are discussed as well.
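To make the spec formalism concrete, the standard metric-weighted combination of spectral semi-sprays — each pair (M_i, a_i) of priority metric and acceleration policy — can be sketched in a few lines of NumPy. This is a minimal illustration of the RMP-style resolve operation on a single shared space; the function name `combine_specs` is a hypothetical label, not notation from the paper.

```python
import numpy as np

def combine_specs(specs):
    """Combine spectral semi-sprays (M_i, a_i) on a shared space.

    Each priority metric M_i weights its policy's desired acceleration
    a_i; the combined policy is the metric-weighted average
    a = (sum_i M_i)^+ (sum_i M_i a_i).
    """
    M_total = sum(M for M, _ in specs)
    f_total = sum(M @ a for M, a in specs)
    return M_total, np.linalg.pinv(M_total) @ f_total

# Two policies with equal (identity) priority metrics simply average.
M, a = combine_specs([(np.eye(2), np.array([1.0, 0.0])),
                      (np.eye(2), np.array([0.0, 1.0]))])
```

In a transform-tree setting, each (M_i, a_i) would first be pulled back through the relevant task-map Jacobian before being combined at the root; the sketch above shows only the combination step itself.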
High-density afferents in the human hand have long been regarded as essential for human grasping and manipulation abilities. In contrast, robotic tactile sensors are typically used to provide low-density contact data, such as center-of-pressure and resultant force. Although useful, this data does not exploit the rich information content that some tactile sensors (e.g., the SynTouch BioTac) naturally provide. This research extends robotic tactile sensing beyond reduced-order models through 1) the automated creation of a precise tactile dataset for the BioTac over diverse physical interactions, 2) a 3D finite element (FE) model of the BioTac, which complements the experimental dataset with high-resolution, distributed contact data, and 3) neural-network-based mappings from raw BioTac signals to low-dimensional experimental data, and more importantly, high-density FE deformation fields. These data streams can provide a far greater quantity of interpretable information for grasping and manipulation algorithms than previously accessible.
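As a point of reference for the learned mappings described above, even the simplest linear baseline — ridge regression from raw sensor channels to low-dimensional contact quantities — can be written in closed form. This is an illustrative sketch only, not the paper's neural-network architecture; the channel and target dimensions below are hypothetical.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    # Closed-form ridge regression W = (X'X + lam*I)^{-1} X'Y mapping
    # raw sensor channels X (n_samples, n_channels) to targets Y.
    n_channels = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Synthetic check: recover a known linear sensor-to-force mapping.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(19, 3))   # e.g. 19 electrode channels -> 3D force
X = rng.normal(size=(500, 19))
W_fit = fit_ridge(X, X @ W_true, lam=1e-6)
```

A neural network replaces the linear map W with a nonlinear function and can regress much higher-dimensional targets, such as the distributed FE deformation fields mentioned above.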
Tracking the pose of an object while it is held and manipulated by a robot hand is difficult for vision-based methods due to significant occlusions. Prior works have explored using contact feedback and particle filters to localize in-hand objects. However, they have mostly focused on the static-grasp setting rather than objects in motion, as the latter requires modeling complex contact dynamics. In this work, we propose using GPU-accelerated parallel robot simulations and derivative-free, sample-based optimizers to track in-hand object poses with contact feedback during manipulation. We use physics simulation as the forward model for robot-object interactions, and the algorithm jointly optimizes the state and the parameters of the simulations so that they better match those of the real world. Our method runs in real time (30 Hz) on a single GPU, and it achieves an average point-cloud distance error of 6 mm in simulation experiments and 13 mm in real-world ones. Experiment videos are available at https://sites.google.com/view/in-hand-object-pose-tracking/
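The core of a derivative-free, sample-based optimizer of the kind referenced above can be sketched as a cross-entropy-method (CEM) update: sample candidate states and parameters, score each candidate with the forward model, and refit the sampling distribution to the best-scoring elite set. The sketch below substitutes a toy quadratic cost for the paper's simulation-based point-cloud error, so it is a minimal illustration of the optimizer alone, not of the full tracking system.

```python
import numpy as np

def cem_step(mean, std, cost_fn, n_samples=64, n_elite=8, rng=None):
    # One cross-entropy-method update: sample candidates, score them
    # with the forward model, refit the Gaussian to the elite set.
    rng = rng or np.random.default_rng(0)
    samples = rng.normal(mean, std, size=(n_samples, mean.size))
    costs = np.array([cost_fn(s) for s in samples])
    elite = samples[np.argsort(costs)[:n_elite]]
    # Floor the std so the search distribution never fully collapses.
    return elite.mean(axis=0), np.maximum(elite.std(axis=0), 1e-3)

# Toy stand-in for the observation error: squared distance to an
# unknown true pose (in the paper, this would be a point-cloud
# distance computed from a GPU-parallel physics simulation).
true_pose = np.array([0.30, -0.20])
cost = lambda s: np.sum((s - true_pose) ** 2)
mean, std = np.zeros(2), np.ones(2)
for _ in range(30):
    mean, std = cem_step(mean, std, cost)
```

Because each candidate is scored independently, the inner loop parallelizes naturally across simulation instances on the GPU, which is what makes real-time rates feasible.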
Teleoperation offers the possibility of imparting robotic systems with sophisticated reasoning skills, intuition, and creativity to perform tasks. However, current teleoperation solutions for high degree-of-actuation (DoA), multi-fingered robots are generally cost-prohibitive, while low-cost offerings usually provide reduced degrees of control. Herein, a low-cost, vision-based teleoperation system, DexPilot, was developed that allows complete control over a full 23-DoA robotic system by merely observing the bare human hand. DexPilot enables operators to carry out a variety of complex manipulation tasks that go beyond simple pick-and-place operations. This allows for the collection of high-dimensional, multi-modal state-action data that can be leveraged in the future to learn sensorimotor policies for challenging manipulation tasks. System performance was measured through speed and reliability metrics across two human demonstrators on a variety of tasks. Videos of the experiments can be found at https://sites.google.com/view/dex-pilot.
The advancement of simulation-assisted robot programming, the automation of high-tolerance assembly operations, and the improvement of real-world performance engender a need for positionally accurate robots. Despite tight machining tolerances, good mechanical design, and careful assembly, robotic arms typically exhibit average Cartesian positioning errors of several millimeters. Fortunately, the vast majority of this error can be removed in software by properly calibrating the so-called "zero-offsets" of a robot's joints. This research developed an automated, inexpensive, highly portable, in situ calibration method that fine-tunes these kinematic parameters, thereby improving a robot's average positioning accuracy four-fold throughout its workspace. In particular, a prospective low-cost motion capture system and a benchmark laser tracker were used as reference sensors for robot calibration. Bayesian inference produced optimized zero-offset parameters, alongside their uncertainty, from data gathered with both reference sensors. Relative and absolute accuracy metrics were proposed and applied to quantify robot positioning accuracy. Uncertainty analysis of a validated, probabilistic robot model quantified the absolute positioning accuracy throughout the robot's entire workspace. Altogether, three measures of accuracy conclusively revealed a multi-fold improvement in the positioning accuracy of the robotic arm. Bayesian inference on motion capture data yielded zero-offsets and accuracy calculations comparable to those derived from laser tracker data, ultimately demonstrating this method's viability for robot calibration.
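The deterministic core of zero-offset calibration can be illustrated on a toy planar two-link arm: given joint configurations and reference-sensor position measurements, a Gauss-Newton fit recovers the offsets that best explain the data. This sketch shows only the maximum-likelihood step; the paper's Bayesian treatment additionally yields posterior uncertainty over the offsets. All names, link lengths, and the two-link kinematics below are illustrative assumptions, not the robot studied in the paper.

```python
import numpy as np

def fk(q, offsets, l1=0.4, l2=0.3):
    # Planar 2-link forward kinematics; `offsets` are the unknown
    # joint zero-offsets being calibrated.
    a = q[0] + offsets[0]
    b = q[1] + offsets[1]
    return np.array([l1 * np.cos(a) + l2 * np.cos(a + b),
                     l1 * np.sin(a) + l2 * np.sin(a + b)])

def calibrate(qs, measured, n_iter=10, eps=1e-6):
    # Gauss-Newton fit of zero-offsets to reference-sensor positions,
    # using a finite-difference Jacobian of the residual.
    theta = np.zeros(2)
    for _ in range(n_iter):
        r = np.concatenate([fk(q, theta) - m for q, m in zip(qs, measured)])
        J = np.zeros((r.size, 2))
        for j in range(2):
            d = np.zeros(2)
            d[j] = eps
            r_p = np.concatenate([fk(q, theta + d) - m
                                  for q, m in zip(qs, measured)])
            J[:, j] = (r_p - r) / eps
        theta -= np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

# Synthetic check: recover known offsets from noiseless measurements.
rng = np.random.default_rng(1)
true_offsets = np.array([0.02, -0.05])
qs = rng.uniform(-1.0, 1.0, size=(20, 2))
measured = [fk(q, true_offsets) for q in qs]
est = calibrate(qs, measured)
```

With Gaussian measurement noise, the Bayesian posterior over the offsets is approximately centered at this least-squares estimate, with covariance on the order of the noise variance times the inverse of JᵀJ.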
The existence of tactile afferents sensitive to slip-related mechanical transients in the human hand augments the robustness of grasping through secondary force-modulation protocols. Despite this knowledge, and despite the fact that tactile-based slip detection has been researched for decades, robust slip detection is still not an out-of-the-box capability of any commercially available tactile sensor. This research seeks to bridge that gap with a comprehensive study addressing several aspects of slip detection. Key developments include a systematic data collection process yielding millions of sensory data points, the generalized conversion of multivariate sensor output to univariate signals, an insightful spectral analysis of these univariate signals, and the application of Long Short-Term Memory (LSTM) neural networks to the univariate signals to produce robust slip detectors for three commercially available tactile sensors. The sensing elements underlying these sensors vary in quantity, spatial arrangement, and mechanics, leveraging principles of electro-mechanical resistance, optics, and hydro-acoustics. Critically, the slip detection performance of each tactile technology is quantified through a measurement methodology that unveils the effects of data window size, sampling rate, material type, slip speed, and sensor manufacturing variability. Results indicate that the investigated commercial tactile sensors are inherently capable of high-quality slip detection.
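Two of the ingredients named above, the multivariate-to-univariate conversion and the spectral analysis, can be sketched with NumPy on synthetic data. The particular conversion below (the norm of the per-sample change across all sensing elements) and the band-energy feature are one simple illustrative choice, not necessarily the conversion or the LSTM classifier used in the study; all numbers are synthetic.

```python
import numpy as np

def to_univariate(frames):
    # Collapse a multivariate tactile stream (T, n_taxels) into a single
    # signal: the norm of the per-sample change across all sensing
    # elements (one illustrative multivariate-to-univariate conversion).
    return np.linalg.norm(np.diff(frames, axis=0), axis=1)

def band_energy(signal, fs, f_lo, f_hi):
    # Spectral energy of the mean-removed signal in [f_lo, f_hi) Hz;
    # slip-related mechanical transients appear as high-frequency energy.
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    return spec[(freqs >= f_lo) & (freqs < f_hi)].sum()

# Synthetic 12-taxel streams: a quiet hold vs. a slip-like vibration.
fs = 1000.0
t = np.arange(0, 0.256, 1.0 / fs)
quiet = 0.01 * np.sin(2 * np.pi * 5 * t)[:, None] * np.ones(12)
slip = quiet + 0.2 * np.sin(2 * np.pi * 180 * t)[:, None] * np.ones(12)
e_quiet = band_energy(to_univariate(quiet), fs, 100, 400)
e_slip = band_energy(to_univariate(slip), fs, 100, 400)
```

In the study, such univariate windows feed an LSTM classifier rather than a fixed band-energy threshold, which is what allows the detector to remain robust across materials, speeds, and sensor units.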
This technical report presents an introduction to different aspects of multi-fingered robot grasping. After introducing the relevant mathematical background for modeling, we discuss form and force closure. Next, we present an overview of various grasp planning algorithms with the objective of illustrating different approaches to this problem. Finally, we discuss grasp performance benchmarking.