Abstract:Successful human-robot collaboration depends on cohesive communication and a precise understanding of the robot's abilities, goals, and constraints. While robotic manipulators offer high precision, versatility, and productivity, they exhibit expressionless and monotonous motions that conceal the robot's intention, reducing efficiency and transparency when working with humans. In this work, we use Laban notation, a dance annotation language, to enable robotic manipulators to generate trajectories with functional expressivity, where the robot uses nonverbal cues to communicate its abilities and the likelihood of succeeding at its task. We achieve this by introducing two novel variants of Hesitant expressive motion (Spoke-Like and Arc-Like). We also enhance the emotional expressivity of four existing emotive trajectories (Happy, Sad, Shy, and Angry) by augmenting Laban Effort usage with Laban Shape. The functionally expressive motions are validated via a human-subjects study, in which participants equate both variants of Hesitant motion with reduced robot competency. The enhanced emotive trajectories are shown to be perceived as distinct emotions on the Valence-Arousal-Dominance (VAD) spectrum, corroborating the use of Laban Shape.
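As an illustration of how such a trajectory modulation could be sketched in code, the snippet below layers a Spoke-Like hesitation (advance, partial retreat, advance again) onto a nominal straight-line end-effector path. The retreat fraction, pause count, and function names are illustrative assumptions rather than the paper's Laban-based implementation.

```python
import numpy as np

def spoke_like_hesitation(start, goal, n_pauses=2, retreat=0.3, pts_per_leg=25):
    """Build a waypoint sequence that advances toward the goal, retreats part
    of the way back (a spoke-like hesitation), and advances again.

    start, goal : (3,) end-effector positions
    n_pauses    : number of advance/retreat cycles before committing
    retreat     : fraction of the covered distance given back on each retreat
    """
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    waypoints = [start]
    for i in range(n_pauses):
        progress = (i + 1) / (n_pauses + 1)          # how far along the reach we get
        waypoints.append(start + progress * (goal - start))
        waypoints.append(start + progress * (1 - retreat) * (goal - start))
    waypoints.append(goal)

    # Linearly interpolate between waypoints to obtain a dense Cartesian path.
    legs = [np.linspace(a, b, pts_per_leg, endpoint=False)
            for a, b in zip(waypoints[:-1], waypoints[1:])]
    legs.append(goal[None, :])
    return np.vstack(legs)

# Example: hesitant reach from the home position toward an object 40 cm away.
path = spoke_like_hesitation([0.3, 0.0, 0.4], [0.7, 0.0, 0.4])
print(path.shape)   # (n_points, 3) dense Cartesian waypoints
```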
Abstract:In collaborative tasks, autonomous agents fall short of humans in their capability to quickly adapt to new and unfamiliar teammates. We posit that a limiting factor for zero-shot coordination is the lack of shared task abstractions, a mechanism humans rely on to implicitly align with teammates. To address this gap, we introduce HA$^2$: Hierarchical Ad Hoc Agents, a framework leveraging hierarchical reinforcement learning to mimic the structured approach humans use in collaboration. We evaluate HA$^2$ in the Overcooked environment, demonstrating statistically significant improvement over existing baselines when paired with both unseen agents and humans, providing better resilience to environmental shifts, and outperforming all state-of-the-art methods.
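A minimal structural sketch of the two-level decision loop such a hierarchical agent could use: a high-level policy selects a subtask abstraction and a low-level policy emits primitive actions until that subtask terminates. The Overcooked-style subtask names and class interfaces are assumptions for illustration, not the HA$^2$ API.

```python
import random

# Illustrative Overcooked-style subtask abstractions (assumed names).
SUBTASKS = ["fetch_onion", "fill_pot", "plate_soup", "deliver"]

class HighLevelPolicy:
    """Chooses the next subtask from the current observation."""
    def select_subtask(self, obs):
        # Placeholder: a trained manager network would score subtasks here.
        return random.choice(SUBTASKS)

class LowLevelPolicy:
    """Executes primitive actions for one subtask until it terminates."""
    def act(self, obs, subtask):
        # Placeholder: a trained worker policy conditioned on the subtask.
        return random.choice(["up", "down", "left", "right", "interact"])

    def terminated(self, obs, subtask):
        return random.random() < 0.1   # stand-in for a learned termination head

class HierarchicalAdHocAgent:
    def __init__(self):
        self.manager, self.worker = HighLevelPolicy(), LowLevelPolicy()
        self.current_subtask = None

    def step(self, obs):
        # Re-plan at the subtask level whenever the current subtask finishes.
        if self.current_subtask is None or self.worker.terminated(obs, self.current_subtask):
            self.current_subtask = self.manager.select_subtask(obs)
        return self.worker.act(obs, self.current_subtask)
```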
Abstract:This paper introduces the Compliant Explicit Reference Governor (C-ERG), an extension of the Explicit Reference Governor that allows the robot to operate safely while in contact with the environment. The C-ERG is an intermediate layer that can be placed between a high-level planner and a low-level controller: its role is to enforce operational constraints and to enable the smooth transition between free-motion and contact operations. The C-ERG ensures safety by limiting the total energy available to the robotic arm at the time of contact. In the absence of contact, however, the C-ERG does not penalize the system performance. Numerical examples showcase the behavior of the C-ERG for increasingly complex systems.
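To make the energy-limiting idea concrete, the sketch below applies the underlying explicit-reference-governor principle to a 1-DoF mass-spring-damper: the applied reference advances toward the desired reference only as fast as the remaining energy margin allows. The energy budget, gains, and plant parameters are illustrative assumptions; this is not the C-ERG formulation itself.

```python
import numpy as np

# 1-DoF mass-spring-damper tracking an applied reference v with a PD-like law.
m, k, d = 1.0, 20.0, 4.0      # mass, stiffness, damping (illustrative)
E_max = 0.5                   # energy budget allowed at contact [J] (assumed)
dt = 1e-3

def total_energy(x, xdot, v):
    """Kinetic energy plus the potential stored in the virtual spring to v."""
    return 0.5 * m * xdot**2 + 0.5 * k * (x - v)**2

def erg_step(x, xdot, v, r, gain=5.0):
    """Move the applied reference v toward the desired reference r only as
    fast as the remaining energy margin allows (the reference-governor idea)."""
    margin = max(0.0, E_max - total_energy(x, xdot, v))
    return v + gain * margin * np.sign(r - v) * dt

x, xdot, v, r = 0.0, 0.0, 0.0, 1.0   # start at rest; desired reference r = 1.0
for _ in range(5000):
    v = erg_step(x, xdot, v, r)
    xddot = (-k * (x - v) - d * xdot) / m
    xdot += xddot * dt
    x += xdot * dt
print(f"x = {x:.3f}, applied reference v = {v:.3f}")
```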
Abstract:Estimating the location of contact is a primary function of artificial tactile sensing apparatuses that perceive the environment through touch. Existing contact localization methods assume flat geometry and uniform sensor distributions, limiting their applicability to 3D surfaces with variable-density sensing arrays. This paper studies contact localization on an artificial skin embedded with mutual capacitance tactile sensors, arranged non-uniformly in an unknown distribution along a semi-conical 3D geometry. A fully connected neural network is trained to localize the touch points on the embedded tactile sensors. The studied online model achieves a localization error of $5.7 \pm 3.0$ mm. This research contributes a versatile tool and robust solution for contact localization on skins with ambiguous shape and internal sensor distribution.
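A hedged sketch of the kind of fully connected regressor the abstract describes, mapping raw capacitance readings to a contact location; the sensor count, layer sizes, and synthetic training batch are assumptions for illustration.

```python
import torch
import torch.nn as nn

N_SENSORS = 40          # number of capacitance taxels (assumed)
OUT_DIM = 3             # predicted contact point (x, y, z) on the skin

class ContactLocalizer(nn.Module):
    """Fully connected regressor: raw capacitance readings -> contact location."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, OUT_DIM),
        )

    def forward(self, readings):
        return self.net(readings)

model = ContactLocalizer()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (readings, ground-truth contact point) pairs.
readings = torch.rand(256, N_SENSORS)
contacts = torch.rand(256, OUT_DIM)

for _ in range(100):
    optim.zero_grad()
    loss = loss_fn(model(readings), contacts)   # squared localization error
    loss.backward()
    optim.step()
```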
Abstract:Tactile sensing is used in robotics to obtain real-time feedback during physical interactions. Fine object manipulation is a robotic application that benefits from a high density of sensors to accurately estimate object pose, whereas a low sensing resolution is sufficient for collision detection. Introducing variable sensing resolution into a single tactile sensing array can increase the range of tactile use cases, but also invokes challenges in localizing internal sensor positions. In this work, we present a mutual capacitance sensor array with variable sensor density, VARSkin, along with a localization method that determines the position of each sensor in the non-uniform array. When tested on two distinct artificial skin patches with concealed sensor layouts, our method achieves a localization accuracy within $\pm 2$ mm. We also provide a comprehensive error analysis, offering strategies for further precision improvement.
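One simple baseline for localizing concealed sensors, shown below, is to press a probe at known positions and estimate each sensor's location as the response-weighted centroid of those probe positions. This is an illustrative sketch under an assumed Gaussian response model, not necessarily the VARSkin localization method.

```python
import numpy as np

def localize_sensors(probe_xy, responses):
    """Estimate unknown sensor positions on a skin patch.

    probe_xy  : (P, 2) known probe positions pressed on the surface [mm]
    responses : (P, S) capacitance change of each of S sensors at each press
    returns   : (S, 2) estimated sensor positions
    """
    w = np.clip(responses, 0.0, None)             # ignore negative noise
    w = w / (w.sum(axis=0, keepdims=True) + 1e-9)  # normalize per sensor
    return w.T @ probe_xy                          # response-weighted centroids

# Synthetic check: 4 hidden sensors, a 20x20 probing grid, Gaussian responses.
rng = np.random.default_rng(0)
true_xy = rng.uniform(0, 50, size=(4, 2))
gx, gy = np.meshgrid(np.linspace(0, 50, 20), np.linspace(0, 50, 20))
probe_xy = np.column_stack([gx.ravel(), gy.ravel()])
dists = np.linalg.norm(probe_xy[:, None, :] - true_xy[None, :, :], axis=-1)
responses = np.exp(-(dists / 8.0) ** 2)

est = localize_sensors(probe_xy, responses)
print(np.linalg.norm(est - true_xy, axis=1))       # per-sensor error [mm]
```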
Abstract:Developing whole-body tactile skins for robots remains a challenging task, as existing solutions often prioritize modular, one-size-fits-all designs, which, while versatile, fail to account for the robot's specific shape and the unique demands of its operational context. In this work, we introduce the GenTact Toolbox, a computational pipeline for creating versatile whole-body tactile skins tailored to both robot shape and application domain. Our pipeline includes procedural mesh generation for conforming to a robot's topology, task-driven simulation to refine sensor distribution, and multi-material 3D printing for shape-agnostic fabrication. We validate our approach by creating and deploying six capacitive sensing skins on a Franka Research 3 robot arm in a human-robot interaction scenario. This work represents a shift from one-size-fits-all tactile sensors toward context-driven, highly adaptable designs that can be customized for a wide range of robotic systems and applications.
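A sketch of what the task-driven refinement step could look like: candidate sensor sites sampled on the robot's surface are ranked by how many simulated contact events they would cover, and the top sites within a fabrication budget are kept. The coverage radius, budget, and synthetic geometry are assumptions; this is not the GenTact pipeline itself.

```python
import numpy as np

def refine_sensor_sites(candidate_sites, contact_log, radius=0.02, budget=32):
    """Keep the sensor sites that best cover simulated contact events.

    candidate_sites : (N, 3) points sampled on the robot's surface mesh [m]
    contact_log     : (M, 3) contact locations recorded during task simulation
    radius          : distance within which a site "covers" a contact
    budget          : number of sensors the fabrication process allows
    """
    d = np.linalg.norm(candidate_sites[:, None, :] - contact_log[None, :, :], axis=-1)
    coverage = (d < radius).sum(axis=1)             # contacts covered per site
    keep = np.argsort(coverage)[::-1][:budget]      # greedy: highest-coverage sites
    return candidate_sites[keep]

# Example with synthetic points standing in for a procedurally generated mesh.
rng = np.random.default_rng(1)
sites = rng.uniform(-0.1, 0.1, size=(500, 3))       # candidate taxel locations
contacts = rng.normal(0.0, 0.03, size=(200, 3))     # simulated contact events
print(refine_sensor_sites(sites, contacts).shape)   # (32, 3)
```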
Abstract:This work explores non-prehensile manipulation (NPM) and whole-body interaction as strategies for enabling robotic manipulators to conduct manipulation tasks despite experiencing locked multi-joint (LMJ) failures. LMJs are critical system faults where two or more joints become inoperable; they impose constraints on the robot's configuration and control spaces, consequently limiting the capability and reach of a prehensile-only approach. Our approach involves three components: i) modeling the failure-constrained workspace of the robot, ii) generating a kinodynamic map of NPM actions within this workspace, and iii) a manipulation action planner that uses a sim-in-the-loop approach to select the best actions to take from the kinodynamic map. The experimental evaluation shows that our approach can increase the failure-constrained reachable area in LMJ cases by 79%. Further, it demonstrates the ability to complete real-world manipulation tasks with up to 88.9% success when the end-effector is unusable and up to 100% success when it is usable.
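A minimal sketch of the sim-in-the-loop selection idea: each candidate NPM action from the kinodynamic map is rolled out in a simulator, and the action that leaves the manipulated object closest to its goal is chosen. The function and attribute names are placeholders, not the paper's implementation.

```python
# Illustrative sim-in-the-loop action selection over a kinodynamic map.
# `simulate`, `distance`, and the action/state structures are assumed
# placeholders for the planner's actual interfaces.

def select_best_action(kinodynamic_map, state, goal, simulate, distance):
    """Roll out every applicable NPM action in simulation and keep the one
    that leaves the manipulated object closest to the goal."""
    best_action, best_cost = None, float("inf")
    for action in kinodynamic_map.applicable_actions(state):
        predicted_state = simulate(state, action)           # physics rollout
        cost = distance(predicted_state.object_pose, goal)  # task progress metric
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action
```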
Abstract:Motion planning for articulated robots has traditionally been governed by algorithms that operate within manufacturer-defined payload limits. Our empirical analysis of the Franka Emika Panda robot demonstrates that this approach unnecessarily restricts the robot's dynamically-reachable task space. These results establish an expanded operational envelope for such robots, showing that they can handle payloads of more than twice their rated capacity. Additionally, our preliminary findings indicate that integrating non-prehensile motion primitives with grasping-based manipulation has the potential to further increase the success rates of manipulation tasks involving payloads exceeding nominal limits.
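A hedged illustration of why feasibility depends on configuration rather than on a single rated payload number: the static check below maps a payload wrench at the end effector into joint torques through the Jacobian transpose of a 2-link planar arm and compares them against torque limits. The arm model and numbers are illustrative, not Panda parameters.

```python
import numpy as np

def payload_feasible(q, payload_kg, link_lengths, tau_limits, g=9.81):
    """Static check for a 2-link planar arm: does holding `payload_kg` at the
    end effector in configuration q stay within the joint torque limits?
    (Link self-weight is ignored for brevity.)"""
    l1, l2 = link_lengths
    q1, q2 = q
    # Planar Jacobian of the end-effector position w.r.t. the joint angles.
    J = np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])
    wrench = np.array([0.0, -payload_kg * g])   # payload weight pulls straight down
    tau = J.T @ wrench                          # joint torques induced by the payload
    return bool(np.all(np.abs(tau) <= tau_limits)), tau

ok, tau = payload_feasible(q=[0.3, 1.2], payload_kg=6.0,
                           link_lengths=[0.4, 0.4], tau_limits=[87.0, 87.0])
print(ok, tau)   # feasibility varies with configuration, not just payload mass
```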
Abstract:In this work, we present a novel algorithm for spill-free handling of open-top, liquid-filled containers in cluttered environments. By allowing liquid-filled containers to be tilted at higher angles and enabling motion along all axes of end-effector orientation, our work extends the reachable space and enhances maneuverability around obstacles, broadening the range of feasible scenarios. Our key contributions include: i) generating spill-free paths through the use of RRT* with an informed sampler that leverages container properties to avoid spill-inducing states (such as an upside-down container), ii) parameterizing the resulting path to generate spill-free trajectories through a time parameterization algorithm coupled with a transformer-based machine-learning model that classifies trajectories as spill-free or not. We validate our approach in real-world, obstacle-rich task settings using containers of various shapes and fill levels and demonstrate an extended solution space that is at least 3x larger than an existing approach.
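A sketch of what a spill-aware informed sampler could look like: container orientations are rejection-sampled so that the tilt from vertical stays below a fill-level-dependent limit, which also excludes upside-down states. The tilt-limit rule and names are illustrative assumptions, not the paper's sampler.

```python
import numpy as np

def max_safe_tilt(fill_fraction):
    """Illustrative rule: fuller containers tolerate less tilt before spilling."""
    return np.deg2rad(90.0 * (1.0 - fill_fraction))

def sample_spill_free_orientation(fill_fraction, rng=np.random.default_rng()):
    """Rejection-sample an end-effector orientation whose container axis stays
    within the allowed tilt from vertical (never upside-down)."""
    limit = max_safe_tilt(fill_fraction)
    while True:
        axis = rng.normal(size=3)                        # random container "up" axis
        axis /= np.linalg.norm(axis)
        tilt = np.arccos(np.clip(axis[2], -1.0, 1.0))    # angle from world +z
        if tilt <= limit:                                # rejects inverted states too
            return axis, tilt

axis, tilt = sample_spill_free_orientation(fill_fraction=0.6)
print(axis, np.rad2deg(tilt))   # tilt stays below ~36 deg for a 60%-full container
```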
Abstract:In this work, we introduce LazyBoE, a multi-query method for kinodynamic motion planning with forward propagation. This algorithm allows for the simultaneous exploration of a robot's state and control spaces, thereby enabling a wider suite of dynamic tasks in real-world applications. Our contributions are three-fold: i) a method for discretizing the state and control spaces to amortize planning times across multiple queries; ii) lazy approaches to collision checking and propagation of control sequences that decrease the cost of physics-based simulation; and iii) LazyBoE, a robust kinodynamic planner that leverages these two contributions to produce dynamically-feasible trajectories. The proposed framework not only reduces planning time but also increases success rate in comparison to previous approaches.
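A sketch of the general lazy-evaluation idea behind such planners: roadmap edges are assumed valid during graph search, the expensive physics-based check is deferred until an edge appears on a candidate path, and results are cached across queries. The interfaces are placeholders, not LazyBoE's.

```python
# Illustrative lazy edge validation over a precomputed roadmap. `shortest_path`,
# `expensive_check`, and `graph.remove_edge` are assumed placeholders for the
# planner's graph search, physics-based propagation, and roadmap structure.

def lazy_plan(graph, start, goal, shortest_path, expensive_check, cache=None):
    """Repeatedly find a candidate path, validate only its edges, and replan
    around any edge that fails the expensive (simulation-based) check."""
    cache = {} if cache is None else cache       # reused across multiple queries
    while True:
        path = shortest_path(graph, start, goal)  # ignores unvalidated edges' cost
        if path is None:
            return None                           # no feasible path remains
        for edge in path:
            if edge not in cache:
                cache[edge] = expensive_check(edge)   # simulate / collision-check lazily
            if not cache[edge]:
                graph.remove_edge(edge)               # invalid edge: replan without it
                break
        else:
            return path                           # every edge validated -> done
```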