Humanoid robots have the potential to perform useful tasks in a world built for humans. However, communicating intention to and teaming with a humanoid robot is a multi-faceted and complex problem. In this paper, we tackle the problem of quickly and interactively authoring new robot behavior that works on real hardware. We bring the concepts of Affordance Templates and the Coactive Design methodology to bear on this problem. In our approach, the operator authors humanoid robot behavior on the fly using interactive stance and hand pose goals along with other types of actions. We describe how our operator interface supports this authoring workflow and provide interdependence analysis charts for task approach and door opening. We present timings from trials on our Nadia humanoid robot traversing a push door and performing a pick-and-place task.
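To make the authoring workflow concrete, the following is a minimal sketch of a behavior represented as an editable sequence of stance and hand pose goals. All names here (StanceGoal, HandPoseGoal, BehaviorSequence) are hypothetical illustrations under our reading of the abstract, not the authors' actual API.

\begin{verbatim}
# Hypothetical sketch: a behavior as an editable action sequence.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StanceGoal:
    """Desired 2D foot placement and yaw for the robot's stance."""
    x: float
    y: float
    yaw: float

@dataclass
class HandPoseGoal:
    """Desired hand pose: side, position, and roll-pitch-yaw."""
    side: str                                 # "left" or "right"
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float]

@dataclass
class BehaviorSequence:
    """Ordered list of actions the operator edits interactively."""
    actions: List[object] = field(default_factory=list)

    def insert(self, index: int, action) -> None:
        self.actions.insert(index, action)

    def remove(self, index: int) -> None:
        del self.actions[index]

# Authoring a door-traversal behavior on the fly: the operator drops
# a stance goal in front of the door, then a hand pose goal on the
# handle. Values are illustrative.
seq = BehaviorSequence()
seq.insert(0, StanceGoal(x=1.2, y=0.0, yaw=0.0))
seq.insert(1, HandPoseGoal("right", (1.5, -0.3, 1.0), (0.0, 0.0, 0.0)))
\end{verbatim}

The key property this structure would capture is that actions can be inserted, reordered, or deleted while the robot is running, which is what "authoring on the fly" implies.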
We present a virtual reality (VR) framework designed to intuitively generate humanoid multi-contact maneuvers for use in unstructured environments. Our framework allows the operator to directly manipulate the inverse kinematics objectives that parameterize a trajectory. Kinematic objectives consisting of spatial poses, center-of-mass position, and joint positions are used by an optimization-based inverse kinematics solver to compute whole-body configurations while enforcing static contact stability. Virtual ``anchors'' allow the operator to freely drag and constrain the robot as well as modify objective weights and constraint sets. The interface's design novelty is a generalized use of anchors that enables arbitrary posture and contact modes. The operator is aided by visual cues of actuation feasibility and by tools for rapid anchor placement. We demonstrate our approach in simulation and on hardware with a NASA Valkyrie humanoid, focusing on multi-contact trajectories that are challenging to generate autonomously or through alternative teleoperation approaches.
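The weighted-objective structure described above can be illustrated on a toy system. The sketch below solves inverse kinematics for a 3-link planar arm by minimizing a weighted sum of spatial-pose, center-of-mass, and joint-posture terms, with an ``anchor'' modeled as a weight change on the pose term. The arm model, weights, and crude center-of-mass estimate are illustrative assumptions, not the Valkyrie solver; it assumes numpy and scipy are available.

\begin{verbatim}
# Toy weighted IK objective on a 3-link planar arm (illustrative only).
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.5, 0.4, 0.3])            # link lengths (m)

def joint_positions(q):
    """Joint angles -> (x, y) endpoint of each link."""
    angles = np.cumsum(q)                     # absolute link angles
    steps = LINKS[:, None] * np.stack(
        [np.cos(angles), np.sin(angles)], axis=1)
    return np.cumsum(steps, axis=0)           # shape (3, 2)

def cost(q, target, com_target, q_nominal, w):
    pts = joint_positions(q)
    ee = pts[-1]                              # end-effector position
    com = pts.mean(axis=0)                    # crude CoM proxy
    return (w["pose"]    * np.sum((ee - target) ** 2)
          + w["com"]     * np.sum((com - com_target) ** 2)
          + w["posture"] * np.sum((q - q_nominal) ** 2))

# An "anchor" on the end effector raises the pose weight so dragging it
# dominates the solution; releasing the anchor would lower it again.
weights = {"pose": 100.0, "com": 1.0, "posture": 0.1}
q0 = np.array([0.3, 0.3, 0.3])
sol = minimize(cost, q0,
               args=(np.array([0.6, 0.6]), np.array([0.2, 0.3]),
                     q0, weights))
print("joint angles:", sol.x)
\end{verbatim}

The full system would additionally enforce static contact stability as hard constraints; here only the soft weighted terms are shown.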
In working toward humanoid robots that perform useful tasks in a world built for humans, we address the problem of autonomous locomotion. Humanoid robot planning and control algorithms for walking over rough terrain are becoming increasingly capable. At the same time, commercially available depth cameras have become more accurate, and GPU computing has become a primary tool in AI research. In this paper, we present a newly constructed behavior control system for achieving fast, autonomous bipedal walking without pauses or deliberation. We achieve this by combining a recently published rapid planar-regions perception algorithm, a height-map-based body path planner, an A* footstep planner, and a momentum-based walking controller into a behavior control system supported by modern software development practices and simulation tools.
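As a rough illustration of the planning layer, the sketch below runs A* over a height-map grid and rejects transitions whose height change exceeds a step limit. A real footstep planner searches over discrete foot poses with kinematic and terrain costs; this toy version, with its illustrative grid values and step-height threshold, only shows the search structure.

\begin{verbatim}
# Minimal A* over a height-map grid (illustrative sketch, not the
# paper's planner). Plans a cell path and rejects untraversable steps.
import heapq

def a_star(height_map, start, goal, max_step_height=0.2):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(height_map), len(height_map[0])
    open_set = [(0.0, start)]
    g = {start: 0.0}
    came_from = {}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            # Traversability check: reject large height changes.
            if abs(height_map[nr][nc] - height_map[r][c]) > max_step_height:
                continue
            tentative = g[cell] + 1.0
            if tentative < g.get((nr, nc), float("inf")):
                g[(nr, nc)] = tentative
                came_from[(nr, nc)] = cell
                # Manhattan distance heuristic, admissible on a 4-grid.
                h = abs(goal[0] - nr) + abs(goal[1] - nc)
                heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None

terrain = [[0.0, 0.0, 0.5],
           [0.0, 0.1, 0.5],
           [0.0, 0.1, 0.15]]
print(a_star(terrain, (0, 0), (2, 2)))   # routes around the 0.5 m ledge
\end{verbatim}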