This paper considers centralized mission planning for a heterogeneous multi-agent system with the aim of locating a hidden target. We propose a mixed observable setting, consisting of a fully observable state-space and a partially observable environment, using a hidden Markov model. First, we build on rapidly exploring random trees (RRTs) to introduce the mixed observable RRT for finding plausible mission plans that provide way-points for each agent. Leveraging this construction, we present a path-selection strategy based on dynamic programming, which accounts for the uncertainty stemming from partial observations and minimizes the expected cost. Finally, we combine the high-level plan with model predictive controllers to evaluate the approach on an experimental setup consisting of a quadruped robot and a drone. It is shown that the agents are able to make intelligent decisions to explore the area efficiently and to locate the target through collaborative actions.
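To make the path-selection step concrete, below is a minimal sketch of the expected-cost backward recursion over an observation-branching plan tree; the `Node` class, the toy tree, and the probabilities are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    cost: float                                    # cost of reaching this way-point
    children: list = field(default_factory=list)   # list of (probability, Node) pairs

def expected_cost(node: Node) -> float:
    """Backward recursion: own cost plus probability-weighted cost of subtrees."""
    if not node.children:
        return node.cost
    return node.cost + sum(p * expected_cost(child) for p, child in node.children)

# Toy plan: after an observation, the target is believed to be in region A (p=0.7)
# or region B (p=0.3), each reached by a different sequence of way-points.
plan = Node(cost=1.0, children=[
    (0.7, Node(cost=2.0)),   # branch: explore region A
    (0.3, Node(cost=5.0)),   # branch: explore region B
])
print(expected_cost(plan))   # 1.0 + 0.7*2.0 + 0.3*5.0 = 3.9
```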
This paper presents a framework for the safety-critical control of robotic systems when safety is defined in terms of safe regions in the configuration space. To maintain safety, we synthesize a safe velocity based on control barrier function theory, without relying on a -- potentially complicated -- high-fidelity dynamical model of the robot. Then, we track the safe velocity with a tracking controller. This culminates in model-free safety-critical control. We prove theoretical safety guarantees for the proposed method. Finally, we demonstrate that this approach is application-agnostic: we execute an obstacle avoidance task with a Segway in high-fidelity simulation, as well as with a drone and a quadruped in hardware experiments.
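As an illustration of the safe-velocity synthesis, the following sketch filters a desired velocity through a single control-barrier-function constraint for a circular obstacle in configuration space; the barrier choice, the obstacle, and the closed-form single-constraint solution are assumptions for this toy setup, not the paper's code.

```python
import numpy as np

def safe_velocity(q, v_des, obstacle, radius, alpha=1.0):
    """Minimally modify v_des so that dh/dt >= -alpha * h(q),
    with h(q) = ||q - obstacle||^2 - radius^2."""
    grad_h = 2.0 * (q - obstacle)                             # gradient of the barrier
    h = float(np.dot(q - obstacle, q - obstacle)) - radius**2
    violation = float(np.dot(grad_h, v_des)) + alpha * h
    if violation >= 0.0:                                      # desired velocity already safe
        return v_des
    # Closed-form solution of the single-constraint QP: project onto the half-space.
    return v_des - violation * grad_h / float(np.dot(grad_h, grad_h))

q = np.array([1.0, 0.0])
v_des = np.array([-1.0, 0.0])   # heading straight toward the obstacle
print(safe_velocity(q, v_des, obstacle=np.zeros(2), radius=0.5))   # [-0.375, 0.0]
```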
Motion planning for autonomous robots and vehicles in the presence of uncontrolled agents remains a challenging problem, as the reactive behaviors of the uncontrolled agents must be considered. Since uncontrolled agents usually exhibit multimodal reactive behavior, the motion planner needs to solve a continuous motion planning problem under these behaviors, which introduces a discrete element. We propose a branch Model Predictive Control (MPC) framework that plans over feedback policies to leverage the reactive behavior of the uncontrolled agent. In particular, a scenario tree is constructed from a finite set of policies of the uncontrolled agent, and the branch MPC solves for a feedback policy in the form of a trajectory tree that shares the same topology as the scenario tree. Moreover, coherent risk measures such as the Conditional Value at Risk (CVaR) are used as a tuning knob to adjust the tradeoff between performance and robustness. The proposed branch MPC framework is tested on an overtake-and-lane-change task and a merging task for autonomous vehicles in simulation, and on the motion planning of an autonomous quadruped robot alongside an uncontrolled quadruped in experiments. The results demonstrate interesting human-like behaviors, achieving a balance between safety and performance.
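To illustrate the trajectory-tree structure, here is a minimal sketch of a branch-MPC-style quadratic program in which the first control input is shared across two scenario branches; the single-integrator dynamics, branch probabilities, and targets are assumed for illustration, and the sketch uses a risk-neutral probability-weighted cost rather than CVaR.

```python
import cvxpy as cp

N, dt, x0 = 5, 0.2, 0.0
branch_prob = [0.7, 0.3]      # uncontrolled agent yields / does not yield
branch_target = [2.0, 0.5]    # ego position target under each branch

u_root = cp.Variable(1)       # first input, shared by both branches
cost, constraints = 0, [cp.abs(u_root) <= 1.0]
for p, target in zip(branch_prob, branch_target):
    u = cp.Variable(N - 1)                                # branch-specific inputs
    x = x0 + dt * cp.cumsum(cp.hstack([u_root, u]))       # single-integrator rollout
    cost += p * (cp.sum_squares(x - target) + 0.1 * cp.sum_squares(u))
    constraints.append(cp.abs(u) <= 1.0)

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u_root.value)           # shared first action that hedges over both branches
```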
Generating provably stable walking gaits that yield natural locomotion when executed on robotic assistive devices is a challenging task that often requires hand-tuning by domain experts. This paper presents an alternative methodology, in which we incorporate musculoskeletal models directly into the gait generation process to intuitively shape the resulting behavior. In particular, we construct a multi-domain hybrid system model that combines the system dynamics with muscle models to represent natural multicontact walking. Stable walking gaits can then be formally generated for this model via the hybrid zero dynamics (HZD) method. We experimentally apply our framework toward achieving multicontact locomotion on a dual-actuated transfemoral prosthesis, AMPRO3. The results demonstrate that enforcing feasible muscle dynamics produces gaits that yield natural locomotion (as analyzed via electromyography), without the need for extensive manual tuning. Moreover, these gaits yield similar behavior to expert-tuned gaits. We conclude that the novel approach of combining robotic walking methods (specifically HZD) with muscle models successfully generates anthropomorphic robotic-assisted locomotion.
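As a rough indication of the kind of muscle model that can be embedded in gait generation, the sketch below evaluates a generic Hill-type muscle force from activation, normalized length, and normalized velocity; the functional forms and every coefficient are textbook-style assumptions, not the paper's specific musculoskeletal model.

```python
import numpy as np

def hill_muscle_force(activation, norm_length, norm_velocity, f_max=1000.0):
    """Generic Hill-type force: activation times force-length and force-velocity
    scaling, plus a passive elastic term (all coefficients assumed)."""
    f_length = np.exp(-((norm_length - 1.0) ** 2) / 0.45)        # active force-length curve
    f_velocity = np.clip(1.0 - norm_velocity / 10.0, 0.0, 1.4)   # crude force-velocity scaling
    f_passive = 0.05 * max(0.0, np.exp(5.0 * (norm_length - 1.0)) - 1.0)
    return f_max * (activation * f_length * f_velocity + f_passive)

print(hill_muscle_force(activation=0.6, norm_length=1.05, norm_velocity=2.0))  # ~492 N
```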
A large class of decision making under uncertainty problems can be described via Markov decision processes (MDPs) or partially observable MDPs (POMDPs), with applications to artificial intelligence and operations research, among others. Traditionally, policy synthesis techniques are proposed such that a total expected cost or reward is minimized or maximized. However, optimality in the total expected cost sense is only reasonable if system behavior over a large number of runs is of interest, which has limited the use of such policies in practical mission-critical scenarios, wherein large deviations from the expected behavior may lead to mission failure. In this paper, we consider the problem of designing policies for MDPs and POMDPs with objectives and constraints in terms of dynamic coherent risk measures, which we refer to as the constrained risk-averse problem. For MDPs, we reformulate the problem into an inf-sup problem via the Lagrangian framework and propose an optimization-based method to synthesize Markovian policies. We demonstrate that the formulated optimization problems are in the form of difference convex programs (DCPs) and can be solved by the disciplined convex-concave programming (DCCP) framework. We show that these results generalize linear programs for constrained MDPs with total discounted expected costs and constraints. For POMDPs, we show that, if the coherent risk measures can be defined as a Markov risk transition mapping, an infinite-dimensional optimization can be used to design Markovian belief-based policies. For stochastic finite-state controllers (FSCs), we show that the latter optimization simplifies to a (finite-dimensional) DCP and can be solved by the DCCP framework. We incorporate these DCPs in a policy iteration algorithm to design risk-averse FSCs for POMDPs.
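As a small, concrete building block, the following sketch expresses CVaR through its Rockafellar-Uryasev convex representation in cvxpy (used here as a stand-in for the full DCCP toolchain); the cost samples, probabilities, and confidence level are illustrative.

```python
import cvxpy as cp
import numpy as np

costs = np.array([1.0, 2.0, 10.0])   # stage costs of three outcomes
probs = np.array([0.5, 0.4, 0.1])
alpha = 0.1                          # CVaR confidence level

zeta = cp.Variable()
cvar = zeta + (1.0 / alpha) * cp.sum(cp.multiply(probs, cp.pos(costs - zeta)))
problem = cp.Problem(cp.Minimize(cvar))
problem.solve()
print(problem.value)                 # ~10.0: expected cost of the worst 10% of outcomes
```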
The ability to realize nonlinear controllers with formal guarantees on dynamic robotic systems has the potential to enable more complex robotic behaviors -- yet, realizing these controllers is often practically challenging. To address this challenge, this paper presents the end-to-end realization of dynamic bipedal locomotion on an underactuated bipedal robot via hybrid zero dynamics and control Lyapunov functions. A compliant model of Cassie is represented as a hybrid system to set the stage for a trajectory optimization framework. With the goal of achieving a variety of walking speeds in all directions, a library of compliant walking motions is compiled and then parameterized for efficient use within real-time controllers. Control Lyapunov functions, which have strong theoretical guarantees, are synthesized to leverage the gait library and coupled with inverse dynamics to obtain optimization-based controllers framed as quadratic programs. We prove that this controller achieves stable locomotion, and couple this result with a theoretical analysis demonstrating useful properties of the controller for tuning and implementation. The proposed theoretical framework is practically demonstrated on the Cassie robot, wherein 3D walking is achieved through the use of optimization-based torque control. The experiments highlight robotic walking at different speeds and on different terrains, illustrating the end-to-end realization of theoretically justified nonlinear controllers on dynamic underactuated robotic systems.
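For intuition on the quadratic-program controllers, here is a minimal CLF-QP sketch on a double integrator rather than the full Cassie model; the Lyapunov matrix, decay rate, and state are assumptions for illustration.

```python
import cvxpy as cp
import numpy as np

# Double integrator: state x = [position, velocity], dynamics dx = [x[1], u].
x = np.array([1.0, 0.0])
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed Lyapunov matrix, V(x) = x^T P x
gamma = 1.0

V = float(x @ P @ x)
Lf_V = float(2.0 * x @ P @ np.array([x[1], 0.0]))   # drift term of V-dot
Lg_V = float(2.0 * x @ P @ np.array([0.0, 1.0]))    # input term of V-dot

u = cp.Variable()
u_des = 0.0                                          # nominal (feedforward) input
qp = cp.Problem(cp.Minimize(cp.square(u - u_des)),
                [Lf_V + Lg_V * u <= -gamma * V])
qp.solve()
print(u.value)    # min-norm input enforcing V-dot <= -gamma*V (here u = -2)
```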
Functional autonomous systems often realize complex tasks by utilizing state machines composed of discrete primitive behaviors and transitions between these behaviors. This architecture has been widely studied in the context of quasi-static and dynamics-independent systems. However, applications of this concept to dynamical systems are relatively sparse, despite extensive research on individual dynamic primitive behaviors, which we refer to as "motion primitives." This paper formalizes a process to determine dynamic-state-aware conditions for transitions between motion primitives in the context of safety. The result is framed as a "motion primitive graph" that can be traversed by standard graph search and planning algorithms to realize functional autonomy. To demonstrate this framework, dynamic motion primitives -- including standing up, walking, and jumping -- and the transitions between these behaviors are experimentally realized on a quadrupedal robot.
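A minimal sketch of traversing such a motion primitive graph with a standard graph search is given below; the primitive names and allowed transitions are illustrative, not the experimentally verified set.

```python
from collections import deque

# Directed edges: transitions assumed admissible given the dynamic state.
primitive_graph = {
    "lying":    ["standing"],
    "standing": ["walking", "jumping", "lying"],
    "walking":  ["standing"],
    "jumping":  ["standing"],
}

def primitive_sequence(start, goal):
    """Breadth-first search for a shortest sequence of primitive transitions."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in primitive_graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(primitive_sequence("lying", "jumping"))   # ['lying', 'standing', 'jumping']
```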
Lower-limb prosthesis wearers are more prone to falls than non-amputees. Powered prostheses can reduce the instability of passive prostheses. While shown to be more stable in practice, powered prostheses generally use model-independent control methods that lack formal guarantees of stability and rely on heuristic tuning. Recent work overcame one of the limitations of model-based prosthesis control by developing a class of stable prosthesis subsystem controllers that are independent of the human model, except for its interaction forces with the prosthesis. However, the force sensors required to measure these socket interaction forces, as well as the ground reaction forces (GRFs), could introduce noise into the control loop, making hardware implementation infeasible. This paper addresses part of this limitation by obtaining some of the GRFs through an insole pressure sensor. It achieves the first model-dependent prosthesis controller that uses in-the-loop, on-board, real-time force sensing, resulting in stable human-prosthesis walking and increasing the validity of our formal guarantees of stability.
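As a rough illustration of using an insole pressure array in the loop, the sketch below estimates the vertical GRF and center of pressure by summing pressure-cell readings; the cell layout, areas, and readings are made-up toy values, not the sensor or processing used in the paper.

```python
import numpy as np

cell_area = 1e-4                                 # m^2 per sensing cell (assumed)
pressures = np.array([40e3, 55e3, 30e3, 0.0])    # Pa, one toy reading per cell
cell_xy = np.array([[0.02, 0.00],                # cell positions in the foot frame (m)
                    [0.05, 0.01],
                    [0.08, 0.00],
                    [0.11, -0.01]])

forces = pressures * cell_area                   # N carried by each cell
grf_z = forces.sum()                             # vertical GRF estimate
cop = (cell_xy * forces[:, None]).sum(axis=0) / grf_z   # center of pressure
print(grf_z, cop)
```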
This paper combines episodic learning and control barrier functions in the setting of bipedal locomotion. The safety guarantees that control barrier functions provide are only valid with perfect model knowledge; however, this assumption cannot be met on hardware platforms. To address this, we utilize the notion of projection-to-state safety paired with a machine learning framework in an attempt to learn the model uncertainty as it affects the barrier functions. The proposed approach is demonstrated both in simulation and on hardware for the AMBER-3M bipedal robot in the context of the stepping-stone problem, which requires precise foot placement while walking dynamically.
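To give a flavor of the episodic-learning step, the sketch below fits the discrepancy between a measured and a model-predicted barrier derivative from logged data via least squares; the features, the synthetic data, and the linear residual model are illustrative assumptions, not the paper's learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(200, 2))            # logged states from one episode
hdot_model = states @ np.array([1.0, -0.5])               # nominal model prediction of h-dot
hdot_measured = hdot_model + (0.3 * states[:, 0] - 0.1)   # unknown model mismatch (synthetic)

# Fit the residual d(x) ~ w^T [x, 1] by least squares over the episode.
features = np.hstack([states, np.ones((len(states), 1))])
w, *_ = np.linalg.lstsq(features, hdot_measured - hdot_model, rcond=None)

def learned_residual(x):
    return np.append(x, 1.0) @ w

# The learned residual augments the barrier condition, e.g.
#   hdot_model(x, u) + d(x) >= -alpha * h(x),
# shrinking the gap between the assumed and true dynamics.
print(learned_residual(np.array([0.5, 0.2])))              # ~0.05
```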
We consider the stochastic shortest path planning problem in MDPs, i.e., the problem of designing policies that ensure reaching a goal state from a given initial state with minimum accrued cost. In order to account for rare but important realizations of the system, we consider a nested dynamic coherent risk total cost functional rather than the conventional risk-neutral total expected cost. Under certain assumptions, we show that optimal, stationary, Markovian policies exist and can be found via a special Bellman equation. We propose a computational technique based on difference convex programs (DCPs) to find the associated value functions and, therefore, the risk-averse policies. A rover navigation MDP is used to illustrate the proposed methodology with the conditional value at risk (CVaR) and entropic value at risk (EVaR) coherent risk measures.
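As a toy illustration of the risk-averse Bellman recursion, the sketch below runs value iteration on a small goal-reaching MDP with a per-step CVaR in place of the expectation; the MDP, the CVaR level, and this particular nested-risk choice are assumptions for illustration.

```python
import numpy as np

def cvar(values, probs, alpha):
    """Expected value over the worst alpha-fraction of outcomes (costs)."""
    order = np.argsort(values)[::-1]                     # worst outcomes first
    v, p = np.asarray(values)[order], np.asarray(probs)[order]
    remaining, total = alpha, 0.0
    for vi, pi in zip(v, p):
        take = min(pi, remaining)
        total += take * vi
        remaining -= take
        if remaining <= 0.0:
            break
    return total / alpha

# States 0 and 1 are transient; state 2 is the cost-free, absorbing goal.
# transitions[s][a] = (successor-state probabilities, stage cost)
transitions = {
    0: {"safe":  (np.array([0.0, 1.0, 0.0]), 2.0),
        "risky": (np.array([0.5, 0.0, 0.5]), 1.0)},
    1: {"go":    (np.array([0.0, 0.0, 1.0]), 1.0)},
}

V = np.zeros(3)
for _ in range(100):                                     # risk-averse value iteration
    V_new = V.copy()
    for s, actions in transitions.items():
        V_new[s] = min(c + cvar(V, p, alpha=0.3) for p, c in actions.values())
    V = V_new
print(V)   # [3. 1. 0.]: the "safe" action wins, although "risky" has lower expected cost
```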