Pedro Patron

MIRIAM: A Multimodal Chat-Based Interface for Autonomous Systems

Mar 06, 2018
Helen Hastie, Francisco J. Chiyah Garcia, David A. Robb, Pedro Patron, Atanas Laskov

We present MIRIAM (Multimodal Intelligent inteRactIon for Autonomous systeMs), a multimodal interface to support situation awareness of autonomous vehicles through chat-based interaction. The user is able to chat about the vehicle's plan, objectives, previous activities and mission progress. The system is mixed-initiative in that it proactively sends messages about key events, such as fault warnings. We will demonstrate MIRIAM using SeeByte's SeeTrack command and control interface and Neptune autonomy simulator.
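
The paper itself includes no code; the following is a minimal Python sketch of the mixed-initiative pattern the abstract describes, in which user queries about the mission and proactive fault warnings share a single chat stream. All identifiers (MISSION, handle_query, fault_monitor) are hypothetical and are not part of MIRIAM, SeeTrack or Neptune.

```python
import queue
import threading
import time

# Hypothetical mission state; a real system would pull this from the
# vehicle's command-and-control interface rather than a static dict.
MISSION = {"objective": "survey area A", "progress": "40%"}

def handle_query(text: str) -> str:
    """Tiny keyword-based query handler (illustrative only)."""
    if "objective" in text:
        return f"Current objective: {MISSION['objective']}"
    if "progress" in text:
        return f"Mission progress: {MISSION['progress']}"
    return "This sketch only reports objectives and progress."

def fault_monitor(outbox: queue.Queue) -> None:
    """Simulates the proactive side: pushes a fault warning after a delay."""
    time.sleep(2)
    outbox.put("WARNING: thruster fault detected on vehicle 1")

outbox: queue.Queue = queue.Queue()
threading.Thread(target=fault_monitor, args=(outbox,), daemon=True).start()

for user_turn in ["what is the objective?", "how is progress?"]:
    print("user:", user_turn)
    print("system:", handle_query(user_turn))
    time.sleep(1.5)
    # Mixed initiative: drain any proactive messages between user turns,
    # so system-initiated warnings appear in the same chat stream.
    while not outbox.empty():
        print("system (proactive):", outbox.get())
```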

* 2 pages, ICMI'17, 19th ACM International Conference on Multimodal Interaction, November 13-17, 2017, Glasgow, UK

Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots

Mar 06, 2018
Francisco J. Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, Helen Hastie

Autonomous systems deployed in remote locations operate with a high degree of autonomy, and there is a need to explain what they are doing and why in order to increase transparency and maintain trust. Here, we describe a natural language chat interface that enables the user to query vehicle behaviour. We obtain an interpretable model of autonomy by having an expert 'speak out loud' and provide explanations during a mission. This approach is agnostic to the type of autonomy model and, as the expert and operator are from the same user group, we predict that these explanations will align well with the operator's mental model, increase transparency and assist with operator training.
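
No implementation is given in the abstract; below is a minimal sketch, assuming the expert's spoken explanations are distilled into a simple behaviour-to-explanation lookup. This illustrates one autonomy-model-agnostic way to answer "why did the vehicle do X?" queries; all identifiers and example explanations are hypothetical, not taken from the paper.

```python
# Hypothetical store of expert-provided explanations, indexed by the
# vehicle behaviour they annotate. In the paper's approach, content
# like this comes from an expert speaking out loud during a mission.
EXPLANATIONS = {
    "abort_mission": "The vehicle aborts when battery level drops below "
                     "the safe-return threshold.",
    "loiter": "The vehicle loiters while waiting for an acoustic fix "
              "before resuming the survey line.",
}

def explain(behaviour: str) -> str:
    """Answer a 'why' query by returning the expert's explanation
    for the named behaviour (illustrative only)."""
    return EXPLANATIONS.get(
        behaviour, "No expert explanation recorded for this behaviour."
    )

print(explain("loiter"))
```

Because the interface only consumes the recorded explanations, not the autonomy model itself, the same lookup works regardless of how the underlying autonomy is implemented.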

* 2 pages. Peer-reviewed position paper accepted at the Explainable Robotic Systems Workshop, ACM Human-Robot Interaction conference, March 2018, Chicago, IL, USA