This paper describes TOBY, a visualization tool that helps a user explore the contents of an academic survey paper. The visualization consists of four components: a hierarchical view of taxonomic data in the survey, a document similarity view in the space of taxonomic classes, a network view of citations, and a new-paper recommendation tool. We discuss these features in the context of three separate deployments of the tool.
Language-capable robots hold unique persuasive power over humans and can thus help regulate people's behavior and maintain a healthy moral ecosystem by rejecting unethical commands and calling out norm violations. However, miscalibrated norm violation responses (i.e., when the harshness of a response does not match the severity of the actual violation) may not only decrease the effectiveness of human-robot communication but also damage the rapport between humans and robots. Therefore, when robots respond to norm violations, it is crucial that they consider both the moral value of their response (how much positive moral influence it could exert) and its social value (how much face threat their utterance might impose). In this paper, we present a simple (naive) mathematical model of proportionality that could explain how moral and social considerations should be balanced when generating responses to norm violations in multi-agent settings. More importantly, we use this model to start a discussion about the hidden complexity of modeling proportionality, and through this discussion identify key research directions that must be explored to develop socially and morally competent language-capable robots.
* Proceedings of the AI-HRI Symposium at AAAI Fall Symposium Series
(FSS) 2022 * 5 pages
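The abstract above does not spell out the model's form; as a purely illustrative placeholder (our notation, not the paper's), a proportionality model balancing moral and social value might select a response $r^{*}$ to a violation of severity $s$ as:

```latex
r^{*} \;=\; \operatorname*{arg\,max}_{r \in R}\;
  \bigl[\, w_m \, M(r, s) \;-\; w_f \, F(r) \,\bigr]
```

where $M(r,s)$ estimates the positive moral influence of response $r$ given severity $s$, $F(r)$ estimates the face threat the utterance imposes, and the weights $w_m, w_f$ encode how the two considerations are balanced; proportionality would require the optimal harshness to grow with $s$.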
As robots become increasingly complex, they must explain their behaviors to gain trust and acceptance. However, verbal explanation alone may not fully convey information about past behavior, especially regarding objects that are no longer present due to robots' or humans' actions. Humans often physically mimic past movements to accompany verbal explanations. Inspired by this human-human interaction, in this tool paper we describe the technical implementation of a system for past-behavior replay on robots. Specifically, we use Behavior Trees to encode and separate robot behaviors, and schemaless MongoDB to structurally store and query the underlying sensor data and joint control messages for future replay. Our approach generalizes to different types of replay, including both manipulation and navigation replay, and both visual (i.e., augmented reality (AR)) and auditory replay. Additionally, we briefly summarize a user study that provides empirical evidence of the system's effectiveness and efficiency. Sample code and instructions are available on GitHub at https://github.com/umhan35/robot-behavior-replay.
* Proceedings of the AI-HRI Symposium at AAAI Fall Symposium Series
(FSS) 2022 * 6 pages, 5 figures
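As a minimal sketch of the schemaless store-and-query pattern the abstract describes (the field names and structure here are our own assumptions, not the released schema; the actual system uses MongoDB, where each message would be one document):

```python
import time


class ReplayStore:
    """In-memory stand-in for a schemaless message store (illustrative only;
    the paper's system persists these as MongoDB documents)."""

    def __init__(self):
        # Each entry is a free-form dict, analogous to a MongoDB document.
        self.messages = []

    def record(self, behavior, topic, payload):
        """Log one sensor or joint-control message under the Behavior Tree
        node that produced it."""
        self.messages.append({
            "behavior": behavior,   # e.g. the Behavior Tree node name
            "topic": topic,         # e.g. "/joint_states"
            "stamp": time.time(),
            "payload": payload,     # arbitrary, schemaless content
        })

    def query(self, behavior, topic):
        """Return the messages for one behavior in time order, analogous to
        find({...}).sort("stamp") in MongoDB, ready for replay."""
        return sorted(
            (m for m in self.messages
             if m["behavior"] == behavior and m["topic"] == topic),
            key=lambda m: m["stamp"],
        )


store = ReplayStore()
store.record("pick_object", "/joint_states", {"positions": [0.1, 0.5]})
store.record("pick_object", "/joint_states", {"positions": [0.2, 0.6]})
replay = store.query("pick_object", "/joint_states")
```

Keying each message by the behavior that produced it is what lets replay be selective (e.g., replaying only a manipulation behavior) rather than replaying an undifferentiated log.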
Autonomous robots must communicate about their decisions to gain trust and acceptance. When doing so, robots must determine which actions are causal, i.e., which directly give rise to the desired outcome, so that these actions can be included in explanations. In the psychology of behavior learning, this sort of reasoning during an action sequence has been studied extensively in the context of imitation learning. Yet these techniques and empirical insights are rarely applied to human-robot interaction (HRI). In this work, we discuss the relevance of behavior learning insights for robot intent communication, and present the first application of these insights to enable a robot to efficiently communicate its intent by selectively explaining the causal actions in an action sequence.
* Appears in Proceedings of the AAAI SSS-22 Symposium "Closing the
Assessment Loop: Communicating Proficiency and Intent in Human-Robot
Teaming". Paper webpage: https://zhaohanphd.com/aaai-sss22/
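One simple way to operationalize "which actions are causal" is ablation: an action counts as causal if removing it from the sequence makes the goal fail. This toy sketch is our own illustrative stand-in, not the authors' method:

```python
def causal_actions(sequence, simulate, goal):
    """Return the actions whose removal breaks the outcome.

    `simulate(actions)` returns the resulting state for a sequence;
    `goal(state)` tests whether the desired outcome holds.
    """
    causal = []
    for i, action in enumerate(sequence):
        ablated = sequence[:i] + sequence[i + 1:]
        if not goal(simulate(ablated)):
            causal.append(action)  # goal fails without it: causal
    return causal


# Toy domain: state is the set of facts established; each action adds a fact.
def simulate(actions):
    return set(actions)


seq = ["open_gripper", "wave", "grasp_cup", "lift_cup"]
needed = causal_actions(
    seq, simulate, lambda s: {"grasp_cup", "lift_cup"} <= s
)
# Only "grasp_cup" and "lift_cup" are causal; "wave" and "open_gripper"
# can be dropped from the explanation.
```

An explanation that mentions only `needed` is shorter than narrating the full sequence, which is the efficiency gain the abstract points to.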
Within the human-robot interaction (HRI) community, many researchers have focused on the careful design of human-subjects studies. However, other parts of the community, e.g., the technical advances community, also need to run human-subjects studies to collect training data for their models, in ways that involve user studies but not a strict experimental design. The design of such data collection is an underexplored area worthy of more attention. In this work, we contribute a clearly defined three-step process for collecting data for machine learning modeling purposes, grounded in recent literature, and detail a use of this process to facilitate the collection of a corpus of referring expressions. Specifically, we discuss our data collection goal and how we worked to encourage well-covered and abundant participant responses through our design of the task environment, the task itself, and the study procedure. We hope this work will lead to more data collection formalism efforts in the HRI community and to a fruitful discussion during the workshop.
Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) has gained considerable research attention in recent years. However, the HRI community lacks a shared terminology and framework for characterizing aspects of mixed reality interfaces, presenting serious problems for future research. It is therefore important to have a common set of terms and concepts that can be used to precisely describe and organize the diverse array of work being done within the field. In this paper, we present a novel taxonomic framework for different types of VAM-HRI interfaces, composed of four main categories of virtual design elements (VDEs). We present and justify our taxonomy, explain how its elements have developed over the last 30 years, and discuss the directions in which VAM-HRI is headed in the coming decade.
For mobile robots, mobile manipulators, and autonomous vehicles to safely navigate crowded places such as streets and warehouses, human observers must be able to understand their navigation intent. One way to enable such understanding is to visualize this intent through projections onto the surrounding environment. Yet despite the demonstrated effectiveness of such projections, no open codebase with an integrated hardware setup exists. In this work, we detail the empirical evidence for the effectiveness of such directional projections and share a robot-agnostic implementation, written in C++ using the widely used Robot Operating System (ROS) and rviz. Additionally, we demonstrate a hardware configuration for deploying this software, using a Fetch robot, and briefly summarize a full-scale user study that motivates this configuration. The code, configuration files (roslaunch and rviz files), and documentation are freely available on GitHub at https://github.com/umhan35/arrow_projection.
* 6 pages, 4 figures, accepted to 2022 ACM/IEEE International
Conference on Human-Robot Interaction (HRI), Short Contributions
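The geometry behind such a directional projection is straightforward; this sketch (our own illustration, not the released C++/ROS code, and the `offset` parameter is an assumption) computes a ground-plane pose for an arrow pointing from the robot toward its next waypoint:

```python
import math


def arrow_projection(robot_xy, waypoint_xy, offset=0.5):
    """Compute (x, y, yaw) for a projected navigation arrow.

    The arrow origin is placed `offset` meters ahead of the robot toward
    the next waypoint, oriented along the direction of travel. In a ROS
    implementation this pose would populate an rviz Marker on the ground
    plane (z = 0).
    """
    dx = waypoint_xy[0] - robot_xy[0]
    dy = waypoint_xy[1] - robot_xy[1]
    yaw = math.atan2(dy, dx)  # heading toward the waypoint
    return (robot_xy[0] + offset * math.cos(yaw),
            robot_xy[1] + offset * math.sin(yaw),
            yaw)


# Robot at the origin heading toward a waypoint 2 m ahead on the x-axis:
x, y, yaw = arrow_projection((0.0, 0.0), (2.0, 0.0))
```

Publishing this pose each control cycle keeps the projected arrow tracking the robot's current direction of travel.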
Social Justice-oriented engineering education frameworks have been developed to help guide engineering students' decisions about which projects will genuinely address human needs and create a better, more equitable society. In this paper, we explore the role such theories might play in the field of AI-HRI, consider the extent to which our community is (or is not) aligned with their recommendations, and envision a future in which our research community takes guidance from them. In particular, we analyze recent AI-HRI research (through analysis of 2020 AI-HRI papers) and consider possible futures of AI-HRI (through a speculative ethics exercise). Both activities are guided through the lens of the Engineering for Social Justice (E4SJ) framework, which centers contextual listening and the enhancement of human capabilities. Our analysis suggests that current AI-HRI research is not well aligned with the guiding principles of E4SJ and, as such, does not obviously meet the needs of the communities we could be helping most. We therefore suggest that motivating future work through the E4SJ framework could help ensure that we as researchers develop technologies that will actually lead to a more equitable world.
* Presented at AI-HRI symposium as part of AAAI-FSS 2021
Due to their unique persuasive power, language-capable robots must be able both to act in line with human moral norms and to clearly and appropriately communicate those norms. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may differ from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide advice that favors the common good over an individual's life. These results raise critical new questions regarding people's moral responses to robots and the design of autonomous moral agents.
* in TRAITS Workshop Proceedings (arXiv:2103.12679) held in conjunction
with Companion of the 2021 ACM/IEEE International Conference on Human-Robot
Interaction, March 2021, Pages 709-711