Abstract: Explanations are central to improving transparency, trust, and user satisfaction in recommender systems (RS), yet it remains unclear how well different explanation formats (visual vs. textual) suit users with different personal characteristics (PCs). To this end, we report a within-subject user study (n=54) comparing visual and textual explanations and examine how explanation format and PCs jointly influence perceived control, transparency, trust, and satisfaction in an educational recommender system (ERS). Using robust mixed-effects models, we analyze the moderating effects of a wide range of PCs, including the Big Five traits, need for cognition, decision-making style, visualization familiarity, and technical expertise. Our results show that a well-designed visualization (simple, interactive, selective, and easy to understand) that clearly and intuitively communicates how user preferences are linked to recommendations fosters perceived control, transparency, appropriate trust, and satisfaction in the ERS for most users, independent of their PCs. Moreover, we derive a set of guidelines to support the effective design of explanations in ERSs.
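To make the analysis design concrete, the following is a minimal sketch of how such a moderation analysis could be set up in Python with statsmodels. The data frame, column names (participant, fmt, nfc, trust), and file name are illustrative assumptions, and statsmodels fits a standard rather than robust linear mixed model, so this is not the paper's exact method.

```python
# Sketch of a moderation analysis with a linear mixed-effects model.
# Assumed long-format columns (hypothetical, not the study's actual data):
#   participant - participant ID (random intercept for the within-subject design)
#   fmt         - explanation format ("visual" vs. "textual")
#   nfc         - need-for-cognition score (one example PC)
#   trust       - perceived-trust rating (one example outcome)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings_long.csv")  # hypothetical file

# The fmt * nfc interaction term tests whether the PC moderates the
# effect of explanation format on the outcome.
model = smf.mixedlm("trust ~ fmt * nfc", data=df, groups=df["participant"])
print(model.fit().summary())
```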
Abstract: We report on our effort to create a corpus of different social situations in an office setting for further disciplinary and interdisciplinary research in computer vision, psychology, and human-robot interaction. For social robots to behave appropriately, they need to be aware of the social context they act in. Consider, for example, a robot tasked with delivering a personal message to a person. If the person is arguing with an office mate at the time of delivery, it may be more appropriate to delay playing the message, so as to respect the recipient's privacy and not interfere with the current situation. This is only possible if the situation is classified correctly and if, in a second step, an appropriate behavior is chosen that fits the social situation. Our work aims to enable robots to accomplish the task of classifying social situations by creating a dataset of semantically annotated video scenes of office situations from television soap operas. The dataset can then serve as a basis for research in both computer vision and human-robot interaction.
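As an illustration of what one annotated scene might look like, here is a minimal sketch of a possible annotation record; the fields, label set, and example values are assumptions for illustration, not the dataset's actual schema.

```python
# Sketch of one possible per-scene annotation record (hypothetical schema).
from dataclasses import dataclass, field

@dataclass
class SceneAnnotation:
    video_id: str                  # source episode identifier
    start_s: float                 # scene start time in seconds
    end_s: float                   # scene end time in seconds
    situation: str                 # e.g. "argument", "casual_chat", "meeting"
    participants: list[str] = field(default_factory=list)
    interruptible: bool = True     # may a robot interject in this situation?

# The delivery example from the abstract: an argument is not interruptible,
# so a robot would delay playing the personal message.
scene = SceneAnnotation(
    video_id="soap_ep_042",
    start_s=132.0,
    end_s=158.5,
    situation="argument",
    participants=["recipient", "office_mate"],
    interruptible=False,
)
```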
Abstract: This paper focuses on the identification of different algorithm-based biases in robotic behaviour and their consequences in mixed human-robot groups. We propose to develop computational models to detect episodes of microaggression, discrimination, and social exclusion, informed by a) observing the human coping behaviours used to regain social inclusion and b) using system-inherent information that reveals unequal treatment of human interactants. Based on this information, we can begin to develop regulatory mechanisms to promote fairness and social inclusion in human-robot interaction (HRI).
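As a rough illustration of point b), the following sketch flags group members who receive a disproportionately small share of the robot's attention in an interaction log; the log format and the disparity threshold are assumptions for illustration, not a validated fairness criterion or the paper's proposed model.

```python
# Sketch: flag unequal treatment from system-inherent interaction logs
# (hypothetical log format: one entry per person the robot addressed).
from collections import Counter

def flag_unequal_treatment(addressee_log, threshold=0.5):
    """Return members whose share of robot attention falls below
    `threshold` times the equal share (1 / number of members seen).
    Note: members the robot never addressed do not appear in the log
    and would need to be checked against the full group roster."""
    counts = Counter(addressee_log)
    total = sum(counts.values())
    equal_share = 1 / len(counts)
    return [person for person, n in counts.items()
            if n / total < threshold * equal_share]

# Example: C was addressed far less often than A and B.
log = ["A", "B", "A", "B", "A", "B", "A", "B", "C", "A"]
print(flag_unequal_treatment(log))  # -> ['C']
```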