Recent advances in multi-agent reinforcement learning (MARL) are enabling impressive coordination in heterogeneous multi-robot teams. However, existing approaches often overlook the challenge of generalizing learned policies to teams of new compositions, sizes, and robots. While such generalization might not be important in teams of virtual agents that can retrain policies on demand, it is pivotal in multi-robot systems that are deployed in the real world and must readily adapt to inevitable changes. As such, multi-robot policies must remain robust to team changes -- an ability we call adaptive teaming. In this work, we investigate whether awareness and communication of robot capabilities can provide such generalization by conducting detailed experiments involving an established multi-robot test bed. We demonstrate that shared decentralized policies that enable robots to both be aware of and communicate their capabilities can achieve adaptive teaming by implicitly capturing the fundamental relationship between collective capabilities and effective coordination. Videos of trained policies can be viewed at: https://sites.google.com/view/cap-comm
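To make the idea concrete, below is a minimal sketch, not the authors' architecture, of a shared decentralized policy that conditions on a robot's own capability vector and on capabilities communicated by teammates. The network sizes, mean-pooling aggregation, and PyTorch framing are illustrative assumptions; pooling over neighbors is what lets one set of weights serve teams of varying size and composition.

```python
# Minimal sketch (not the authors' implementation): a shared decentralized
# policy conditioned on the robot's own capabilities and on capabilities
# communicated by neighbors. Sizes and aggregation are illustrative.
import torch
import torch.nn as nn

class CapabilityAwarePolicy(nn.Module):
    def __init__(self, obs_dim, cap_dim, act_dim, hidden=64):
        super().__init__()
        # Encode the robot's own observation together with its capability vector.
        self.self_enc = nn.Sequential(nn.Linear(obs_dim + cap_dim, hidden), nn.ReLU())
        # Encode capability vectors received from teammates (communication).
        self.comm_enc = nn.Sequential(nn.Linear(cap_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, act_dim)

    def forward(self, obs, own_cap, neighbor_caps):
        # neighbor_caps: (num_neighbors, cap_dim); mean-pooling makes the policy
        # invariant to the number and identity of teammates.
        h_self = self.self_enc(torch.cat([obs, own_cap], dim=-1))
        h_comm = self.comm_enc(neighbor_caps).mean(dim=0)
        return self.head(torch.cat([h_self, h_comm], dim=-1))

# Usage: the same weights are shared by every robot in the team.
policy = CapabilityAwarePolicy(obs_dim=10, cap_dim=3, act_dim=2)
action_logits = policy(torch.randn(10), torch.randn(3), torch.randn(4, 3))
```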
Effective and fluent close-proximity human-robot interaction requires understanding how humans get habituated to robots and how robot motion affects human comfort. While prior work has identified humans' preferences over robot motion characteristics and studied their influence on comfort, we have yet to understand how novice first-time robot users get habituated to robots and how robot motion impacts the dynamics of comfort over repeated interactions. To take a first step towards such understanding, we carry out a user study to investigate the connections between robot motion and user comfort and habituation. Specifically, we study the influence of workspace overlap, end-effector speed, and robot motion legibility on overall comfort and its evolution over repeated interactions. Our analyses reveal that workspace overlap, in contrast to speed and legibility, has a significant impact on users' perceived comfort and habituation. In particular, lower workspace overlap leads to users reporting significantly higher overall comfort, lower variations in comfort, and fewer fluctuations in comfort levels during habituation.
Multi-agent reinforcement learning (MARL) has enjoyed significant recent progress, thanks to deep learning. This is naturally starting to benefit multi-robot systems (MRS) in the form of multi-robot RL (MRRL). However, existing infrastructure for training and evaluating policies predominantly focuses on the challenges of coordinating virtual agents and ignores characteristics important to robotic systems. Few platforms support realistic robot dynamics, and fewer still can evaluate Sim2Real performance of learned behavior. To address these issues, we contribute MARBLER: Multi-Agent RL Benchmark and Learning Environment for the Robotarium. MARBLER offers a robust and comprehensive evaluation platform for MRRL by marrying Georgia Tech's Robotarium (which enables rapid prototyping on physical MRS) with OpenAI's Gym framework (which facilitates standardized use of modern learning algorithms). MARBLER offers a highly controllable environment with realistic dynamics, including barrier certificate-based obstacle avoidance. It allows anyone across the world to train and deploy MRRL algorithms on a physical testbed with reproducibility. Further, we introduce five novel scenarios inspired by common challenges in MRS and provide support for new custom scenarios. Finally, we use MARBLER to evaluate popular MARL algorithms and provide insights into their suitability for MRRL. In summary, MARBLER can be a valuable tool to the MRS research community by facilitating comprehensive and standardized evaluation of learning algorithms on realistic simulations and physical hardware. Links to our open-source framework and the videos of real-world experiments can be found at https://shubhlohiya.github.io/MARBLER/.
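For readers unfamiliar with the Gym interface, the sketch below shows the kind of interaction loop a MARBLER-style environment supports. It is illustrative only: the scenario id is hypothetical, the classic four-tuple Gym step API is assumed, and the actual scenario names and observation structure should be taken from the MARBLER repository.

```python
# Illustrative only: a standard Gym-style interaction loop of the kind MARBLER
# supports. The scenario id is hypothetical; see the MARBLER repository for the
# actual scenario names and API (classic Gym step/reset signatures assumed).
import gym

def run_episode(env_id, policy=None):
    env = gym.make(env_id)
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # A trained MRRL policy would map observations to actions here;
        # we fall back to random actions for illustration.
        action = policy(obs) if policy else env.action_space.sample()
        obs, reward, done, info = env.step(action)
        total_reward += reward
    env.close()
    return total_reward

# e.g., run_episode("SomeMarblerScenario-v0")  # hypothetical scenario id
```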
Task allocation can enable effective coordination of multi-robot teams to accomplish tasks that are intractable for individual robots. However, existing approaches to task allocation often assume that task requirements or reward functions are known and explicitly specified by the user. In this work, we consider the challenge of forming effective coalitions for a given heterogeneous multi-robot team when task reward functions are unknown. To this end, we first formulate a new class of problems, dubbed COncurrent Constrained Online optimization of Allocation (COCOA). The COCOA problem requires online optimization of coalitions such that the unknown rewards of all the tasks are simultaneously maximized using a given multi-robot team with constrained resources. To address the COCOA problem, we introduce an online optimization algorithm, named Concurrent Multi-Task Adaptive Bandits (CMTAB), that leverages and builds upon continuum-armed bandit algorithms. Experiments involving detailed numerical simulations and a simulated emergency response task reveal that CMTAB can effectively trade off exploration and exploitation to simultaneously and efficiently optimize the unknown task rewards while respecting the team's resource constraints.
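The sketch below conveys the underlying idea in a deliberately simplified form: it is not CMTAB, but a discretized UCB rule that allocates a shared, constrained resource across tasks with unknown rewards and learns from the observed outcomes. The discretization, the greedy constraint handling, and the stand-in reward function are all illustrative assumptions.

```python
# Simplified sketch (not CMTAB): online allocation of a constrained resource
# across tasks with unknown rewards using a discretized UCB rule.
import numpy as np

rng = np.random.default_rng(0)
levels = np.linspace(0.0, 1.0, 11)          # candidate allocation levels per task
n_tasks = 3
counts = np.ones((n_tasks, len(levels)))    # pull counts (init 1 to avoid /0)
means = np.zeros((n_tasks, len(levels)))    # running mean reward estimates

def unknown_reward(task, x):
    # Stand-in for the unknown reward observed after committing resources.
    peaks = [0.3, 0.6, 0.9]
    return np.exp(-10 * (x - peaks[task]) ** 2) + 0.05 * rng.standard_normal()

for t in range(1, 501):
    ucb = means + np.sqrt(2 * np.log(t) / counts)
    # Greedily pick allocation levels per task while respecting the shared budget.
    choice, budget = [], 1.0
    for task in np.argsort(-ucb.max(axis=1)):
        feasible = levels <= budget
        idx = int(np.argmax(np.where(feasible, ucb[task], -np.inf)))
        choice.append((task, idx))
        budget -= levels[idx]
    # Observe rewards and update estimates (exploration vs. exploitation).
    for task, idx in choice:
        r = unknown_reward(task, levels[idx])
        counts[task, idx] += 1
        means[task, idx] += (r - means[task, idx]) / counts[task, idx]
```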
Effective close-proximity human-robot interaction (CP-HRI) requires robots both to perform tasks efficiently and to adapt to human behavior and preferences. However, this ability is mediated by many, sometimes competing, aspects of interaction. We propose a real-time motion-planning framework for robotic manipulators that can simultaneously optimize a set of both task- and human-centric cost functions. To this end, we formulate a Nonlinear Model-Predictive Control (NMPC) problem with kinodynamic constraints and efficiently solve it by leveraging recent advances in nonlinear trajectory optimization. We employ stochastic predictions of the human partner's trajectories in order to adapt the robot's nominal behavior in anticipation of its human partner. Our framework explicitly models and allows balancing of different task- and human-centric cost functions. While previous approaches to trajectory optimization for CP-HRI take anywhere from several seconds to a full minute to compute a trajectory, our approach is capable of computing one in 318 ms on average, enabling real-time implementation. We illustrate the effectiveness of our framework by simultaneously optimizing for separation distance, end-effector visibility, legibility, smoothness, and deviation from nominal behavior. We also demonstrate that our approach performs comparably to prior work in terms of the chosen cost functions, while significantly improving computational efficiency.
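As a rough illustration of how task- and human-centric terms can be combined into one trajectory-optimization objective, here is a minimal sketch. It is not the paper's NMPC solver: the placeholder kinematics, the specific cost terms, the weights, and the off-the-shelf L-BFGS-B call are all assumptions made for brevity.

```python
# Minimal sketch, not the paper's NMPC formulation: a weighted sum of task- and
# human-centric costs over a short joint-space trajectory.
import numpy as np
from scipy.optimize import minimize

H, dof = 10, 7                        # horizon steps, joint dimensions
q_goal = np.full(dof, 0.5)            # hypothetical joint-space goal
human_pos = np.array([0.6, 0.0, 0.4]) # predicted human position (stand-in)

def cost(traj_flat, w_task=1.0, w_sep=2.0, w_smooth=0.1):
    traj = traj_flat.reshape(H, dof)
    task = np.sum((traj[-1] - q_goal) ** 2)                        # reach the goal
    ee = traj[:, :3]                                               # placeholder forward kinematics
    sep = np.sum(np.exp(-np.linalg.norm(ee - human_pos, axis=1)))  # keep distance from the human
    smooth = np.sum(np.diff(traj, axis=0) ** 2)                    # joint-space smoothness
    return w_task * task + w_sep * sep + w_smooth * smooth

res = minimize(cost, np.zeros(H * dof), method="L-BFGS-B")
optimized_trajectory = res.x.reshape(H, dof)
```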
Recent advances in learning-based approaches have led to impressive dexterous manipulation capabilities. Yet, we have not witnessed widespread adoption of these capabilities beyond the laboratory. This is likely due to practical limitations, such as significant computational burden, inscrutable policy architectures, sensitivity to parameter initializations, and the considerable technical expertise required for implementation. In this work, we investigate the utility of Koopman operator theory in alleviating these limitations. Koopman operators are simple yet powerful control-theoretic structures that help represent complex nonlinear dynamics as linear systems in higher-dimensional spaces. Motivated by the fact that complex nonlinear dynamics underlie dexterous manipulation, we develop an imitation learning framework that leverages Koopman operators to simultaneously learn the desired behavior of both robot and object states. We demonstrate that a Koopman operator-based framework is surprisingly effective for dexterous manipulation and offers a number of unique benefits. First, the learning process is analytical, eliminating the sensitivity to parameter initializations and painstaking hyperparameter optimization. Second, the learned reference dynamics can be combined with a task-agnostic tracking controller such that task changes and variations can be handled with ease. Third, a Koopman operator-based approach can perform comparably to state-of-the-art imitation learning algorithms in terms of task success rate and imitation error, while being an order of magnitude more computationally efficient. In addition, we discuss a number of avenues for future research made available by this work.
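The analytical nature of the learning process can be seen in the following sketch: lift the concatenated robot and object states with fixed basis functions and solve a single least-squares problem for a linear operator K such that g(x_{t+1}) ≈ K g(x_t). The polynomial lifting functions and the state dimensions below are illustrative assumptions, not the paper's specific choices.

```python
# Minimal sketch, under simplifying assumptions, of an analytical Koopman fit
# from demonstrations: no gradient descent, no hyperparameter tuning.
import numpy as np

def lift(x):
    # Simple polynomial lifting of the concatenated robot + object state.
    return np.concatenate([x, x ** 2, [1.0]])

def fit_koopman(demos):
    """demos: list of state trajectories, each of shape (T, state_dim)."""
    G_t, G_next = [], []
    for traj in demos:
        for x, x_next in zip(traj[:-1], traj[1:]):
            G_t.append(lift(x))
            G_next.append(lift(x_next))
    G_t, G_next = np.array(G_t), np.array(G_next)
    # Analytical least-squares solution: lift(x_next) ≈ K @ lift(x).
    X, *_ = np.linalg.lstsq(G_t, G_next, rcond=None)
    return X.T

demos = [np.random.randn(50, 6) for _ in range(5)]  # toy demonstrations
K = fit_koopman(demos)
```

At run time, the learned reference dynamics (rolling K forward from the current lifted state) would be handed to a task-agnostic tracking controller, which is what makes task variations easy to accommodate.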
Task allocation in heterogeneous multi-agent teams often requires reasoning about multi-dimensional agent traits (i.e., capabilities) and the demands placed on them by tasks. However, existing methods tend to ignore the fact that not all traits contribute equally to a given task. Ignoring such inherent preferences or relative importance can lead to unintended sub-optimal allocations of limited agent resources that do not necessarily contribute to task success. Further, reasoning over a large number of traits can incur a hefty computational burden. To alleviate these concerns, we propose an algorithm to infer task-specific trait preferences implicit in expert demonstrations. We leverage the insight that the consistency with which an expert allocates a trait to a task across demonstrations reflects the trait's importance to that task. Inspired by findings in psychology, we account for the fact that the inherent diversity of a trait in the dataset influences the dataset's informativeness and, thereby, the extent of the inferred preference or the lack thereof. Through detailed numerical simulations and evaluations on a publicly available soccer dataset (FIFA 20), we demonstrate that we can successfully infer implicit trait preferences and that accounting for the inferred preferences leads to more computationally efficient and effective task allocation, compared to a baseline approach that treats all traits equally.
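The sketch below captures the core intuition in simplified form: traits that an expert allocates consistently to a task (low variance across demonstrations, relative to how diverse that trait is in the dataset) receive higher preference weights. The specific consistency measure and normalization here are assumptions for illustration, not the paper's exact formulation.

```python
# Simplified sketch (not the paper's exact measure): infer task-specific trait
# preferences from allocation consistency across expert demonstrations.
import numpy as np

def infer_trait_preferences(allocations, trait_diversity):
    """
    allocations: (n_demos, n_traits) aggregate traits allocated to one task
                 in each demonstration.
    trait_diversity: (n_traits,) spread of each trait across the whole dataset,
                     used to discount traits that could not vary much anyway.
    Returns preference weights that sum to 1.
    """
    normalized_var = np.var(allocations, axis=0) / (trait_diversity + 1e-8)
    consistency = 1.0 / (normalized_var + 1e-8)
    return consistency / consistency.sum()

# Toy example: trait 0 is allocated very consistently, so it gets a high weight.
allocs = np.array([[5.0, 2.1, 0.9],
                   [5.1, 1.4, 3.0],
                   [4.9, 2.7, 0.2]])
prefs = infer_trait_preferences(allocs, trait_diversity=np.array([2.0, 2.0, 2.0]))
```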
Existing learning approaches to dexterous manipulation use demonstrations or interactions with the environment to train black-box neural networks that provide little control over how the robot learns the skills or how it will perform after training. These approaches pose significant challenges when implemented on physical platforms given that, during initial stages of training, the robot's behavior could be erratic and potentially harmful to its own hardware, the environment, or any humans in the vicinity. A potential way to address these limitations is to add constraints during learning that restrict and guide the robot's behavior during training as well as rollouts. Inspired by the success of constrained approaches in other domains, we investigate the effects of adding position-based constraints to a 24-DOF robot hand learning to perform object relocation using Constrained Policy Optimization. We find that a simple geometric constraint can ensure the robot learns to move towards the object sooner than without constraints. Further, training with this constraint requires a similar number of samples as its unconstrained counterpart to master the skill. These findings shed light on how simple constraints can help robots achieve sensible and safe behavior quickly and ease concerns surrounding hardware deployment. We also investigate the effects of the strictness of these constraints and report findings that provide insights into how different degrees of strictness affect learning outcomes. Our code is available at https://github.com/GT-STAR-Lab/constrained-rl-dexterous-manipulation.
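To illustrate what a simple position-based geometric constraint might look like when fed to a constrained RL algorithm such as CPO, here is a minimal sketch; the radius, the state layout, and the indicator-style cost are hypothetical choices, and the actual constraint definitions are in the linked repository.

```python
# Illustrative only: a per-step position-based constraint cost of the kind a
# constrained policy optimizer (e.g., CPO) would bound in expectation.
import numpy as np

CONSTRAINT_RADIUS = 0.15  # hypothetical: hand must stay within 15 cm of the object

def constraint_cost(hand_pos, object_pos):
    """Return 1.0 when the geometric constraint is violated, 0.0 otherwise.
    Stricter constraints correspond to a smaller radius or a tighter cost limit."""
    return float(np.linalg.norm(hand_pos - object_pos) > CONSTRAINT_RADIUS)
```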
Complex, multi-objective missions require the coordination of heterogeneous robots at multiple interconnected levels, such as coalition formation, scheduling, and motion planning. This challenge is exacerbated by dynamic changes, such as sensor and actuator failures, communication loss, and unexpected delays. We introduce Dynamic Iterative Task Allocation Graph Search (D-ITAGS) to simultaneously address coalition formation, scheduling, and motion planning in dynamic settings involving heterogeneous teams. D-ITAGS achieves resilience via two key characteristics: i) interleaved execution, and ii) targeted repair. Interleaved execution enables an effective search for solutions at each layer while avoiding incompatibility with other layers. Targeted repair identifies and repairs parts of the existing solution impacted by a given disruption, while conserving the rest. In addition to algorithmic contributions, we provide theoretical insights into the inherent trade-off between time and resource optimality in these settings and derive meaningful bounds on schedule suboptimality. Our experiments reveal that i) D-ITAGS is significantly faster than recomputation from scratch in dynamic settings, with little to no loss in solution quality, and ii) the theoretical suboptimality bounds consistently hold in practice.