Abstract: Object transport tasks are fundamental in robotic automation, emphasizing the importance of efficient and secure methods for moving objects. Non-prehensile transport can significantly improve transport efficiency, as it enables handling multiple objects simultaneously and accommodating objects unsuitable for parallel-jaw or suction grasps. Existing approaches incorporate constraints based on the Coulomb friction model, which is imprecise during fast motions where inherent mechanical vibrations occur. Imprecise constraints can cause transported objects to slide or even fall off the tray. To address this limitation, we propose a novel method to learn a friction model using acoustic sensing that maps a tray's motion profile to a dynamically conditioned friction coefficient. This learned model enables an optimization-based motion planner to adjust the friction constraint at each control step according to the planned motion at that step. In experiments, we generate time-optimized trajectories for a UR5e robot to transport various objects under constraints from both the standard Coulomb friction model and the learned friction model. Results suggest that the learned friction model reduces object displacement by up to 86.0% compared to the baseline, highlighting the effectiveness of acoustic sensing in learning real-world friction constraints.
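The sketch below illustrates one way a motion-conditioned friction coefficient could replace a constant Coulomb coefficient in a per-step friction-cone constraint, as the abstract describes. It is not the authors' implementation: the function `predict_mu`, the linear feature weighting, the feature choice (planned acceleration and jerk magnitudes), and all numeric values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): a learned,
# motion-conditioned friction coefficient used in a per-step friction-cone check.
import numpy as np

def predict_mu(accel, jerk, w=np.array([-0.02, -0.01]), mu_static=0.45):
    """Toy stand-in for a learned model: the effective friction coefficient
    drops as the planned motion becomes more aggressive (vibration-prone)."""
    mu = mu_static + w[0] * np.linalg.norm(accel) + w[1] * np.linalg.norm(jerk)
    return float(np.clip(mu, 0.05, mu_static))

def friction_constraint_ok(f_tangential, f_normal, mu):
    """Coulomb-style friction-cone check: |f_t| <= mu * f_n."""
    return np.linalg.norm(f_tangential) <= mu * f_normal

# Example: evaluate the constraint at one control step of a planned trajectory.
g = 9.81
mass = 0.3                          # tray-borne object mass [kg] (assumed)
accel = np.array([2.0, 0.0, 0.0])   # planned linear acceleration [m/s^2]
jerk = np.array([15.0, 0.0, 0.0])   # planned jerk [m/s^3]
mu_step = predict_mu(accel, jerk)   # dynamically conditioned coefficient
f_t = mass * accel[:2]              # tangential force required to carry the object
f_n = mass * (g + accel[2])         # normal force from gravity plus vertical accel
print(mu_step, friction_constraint_ok(f_t, f_n, mu_step))
```

In an optimization-based planner, this check would appear as an inequality constraint at every control step, with the coefficient re-evaluated from the planned motion rather than held fixed.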
Abstract: Many bioinspired robots mimic the rigid articulated joint structure of the human hand for grasping tasks, but experience high-frequency mechanical perturbations that can destabilize the system and negatively affect precision without a high-frequency controller. Despite having bandwidth-limited controllers that experience time delays between sensing and actuation, biological systems can respond successfully to and mitigate these high-frequency perturbations. Human joints include damping and stiffness that many rigid articulated bioinspired hand robots lack. To enable researchers to explore the effects of joint viscoelasticity on joint control, we developed a human-hand-inspired grasping robot with viscoelastic structures that utilizes accessible and bioderived materials to reduce the economic and environmental impact of prototyping novel robotic systems. We demonstrate that an elastic element at the finger joints is necessary to achieve concurrent flexion, which enables secure grasping of spherical objects. To significantly damp the manufactured finger joints, we modeled, manufactured, and characterized rotary dampers using peanut butter as an organic analog joint working fluid. Finally, we demonstrated that a real-time position-based controller could be used to successfully catch a lightweight falling ball. We developed this open-source, low-cost grasping platform that abstracts the morphological and mechanical properties of the human hand to enable researchers to explore questions about biomechanics in robots that would otherwise be difficult to test in simulation or modeling.
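A minimal simulation sketch of the kind of viscoelastic joint the abstract describes: a single finger joint with an elastic return element (stiffness k) and a viscous rotary damper (damping c), driven by a constant tendon torque. This is not the paper's model; the equation of motion, the integrator, and every parameter value are assumptions chosen only to show the qualitative effect of adding stiffness and damping to a rigid joint.

```python
# Minimal sketch (assumed single-joint model, not the paper's):
# I*theta_ddot = tau_tendon - k*theta - c*theta_dot, integrated with
# semi-implicit Euler. All parameter values are illustrative.
import numpy as np

def simulate_joint(k=0.08, c=0.02, I=1e-4, tau_tendon=0.05,
                   dt=1e-3, t_end=2.0):
    """Return the joint-angle trajectory of a stiff, damped finger joint."""
    steps = int(t_end / dt)
    theta, theta_dot = 0.0, 0.0
    history = np.empty(steps)
    for i in range(steps):
        theta_ddot = (tau_tendon - k * theta - c * theta_dot) / I
        theta_dot += theta_ddot * dt       # update velocity first
        theta += theta_dot * dt            # then position (semi-implicit Euler)
        history[i] = theta
    return history

angles = simulate_joint()
# With damping, the joint settles toward the elastic equilibrium tau/k
# instead of oscillating indefinitely.
print(f"joint angle at t_end: {angles[-1]:.3f} rad (tau/k = {0.05/0.08:.3f} rad)")
```

The elastic term is what pulls the joint back toward a rest pose (enabling concurrent flexion under a shared tendon), while the damping term suppresses the high-frequency oscillations that a bandwidth-limited controller cannot reject on its own.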
Abstract: Transparent objects are ubiquitous in industry, pharmaceuticals, and households. Grasping and manipulating these objects is a significant challenge for robots. Existing methods have difficulty reconstructing complete depth maps for challenging transparent objects, leaving holes in the depth reconstruction. Recent work has shown that neural radiance fields (NeRFs) work well for depth perception in scenes with transparent objects, and these depth maps can be used to grasp transparent objects with high accuracy. However, NeRF-based depth reconstruction can still struggle with especially challenging transparent objects and lighting conditions. In this work, we propose Residual-NeRF, a method to improve depth perception and training speed for transparent objects. Robots often operate in the same area, such as a kitchen. By first learning a background NeRF of the scene without the transparent objects to be manipulated, we reduce the ambiguity faced when later learning the changes introduced by the new object. We propose training two additional networks: a residual NeRF learns to infer residual RGB values and densities, and a Mixnet learns how to combine the background and residual NeRFs. We contribute synthetic and real experiments that suggest Residual-NeRF improves depth perception of transparent objects. The results on synthetic data suggest Residual-NeRF outperforms the baselines with a 46.1% lower RMSE and a 29.5% lower MAE. Real-world qualitative experiments suggest Residual-NeRF leads to more robust depth maps with less noise and fewer holes. Website: https://residual-nerf.github.io
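The sketch below shows one plausible reading of the described composition: per-sample outputs from a frozen background NeRF and a residual NeRF are blended by a Mixnet weight before standard NeRF volume rendering. This is an assumption about the architecture rather than the released code; the blending rule, the function names, and the random stand-in predictions are all hypothetical.

```python
# Minimal sketch (assumed composition scheme, not the authors' code):
# blend background and residual NeRF samples with a Mixnet weight,
# then volume-render one ray.
import numpy as np

def blend_samples(bg_rgb, bg_sigma, res_rgb, res_sigma, mix_w):
    """Combine background and residual predictions per ray sample.
    mix_w in [0, 1]: 0 keeps the background, 1 uses the residual branch."""
    rgb = (1.0 - mix_w)[..., None] * bg_rgb + mix_w[..., None] * res_rgb
    sigma = (1.0 - mix_w) * bg_sigma + mix_w * res_sigma
    return rgb, sigma

def render_ray(rgb, sigma, deltas):
    """Standard NeRF volume rendering along a single ray."""
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)

# Example with random per-sample predictions for one ray of 64 samples.
n = 64
bg_rgb, res_rgb = np.random.rand(n, 3), np.random.rand(n, 3)
bg_sigma, res_sigma = np.random.rand(n), np.random.rand(n)
mix_w = np.random.rand(n)            # would come from the Mixnet in practice
deltas = np.full(n, 1.0 / n)         # sample spacing along the ray
rgb_blend, sigma_blend = blend_samples(bg_rgb, bg_sigma, res_rgb, res_sigma, mix_w)
print("rendered pixel:", render_ray(rgb_blend, sigma_blend, deltas))
```

The intuition matching the abstract is that the background branch already explains most of the scene, so the residual branch and Mixnet only have to account for what the newly placed transparent object changes.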