Abstract: The ability to display rich facial expressions is crucial for human-like robotic heads. While manually defining such expressions is intricate, approaches already exist to learn them automatically. In this work, one such approach is applied to evaluate and control a robot head different from the one in the original study. To improve the mapping of facial expressions from human actors onto a robot head, it is proposed to use 3D landmarks and their pairwise distances as input to the learning algorithm instead of the previously used facial action units. Participants of an online survey preferred mappings from our proposed approach in most cases, though further improvements are still required.
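As a rough illustration of the proposed feature representation, pairwise distances of 3D landmarks can be computed as sketched below (a minimal sketch assuming landmarks arrive as an (N, 3) array; the function name, landmark count, and feature selection are illustrative and not taken from the paper):

```python
import numpy as np

def pairwise_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Flatten an (N, 3) array of 3D facial landmarks into a vector of
    all pairwise Euclidean distances (upper triangle, no diagonal).
    Illustrative sketch; the actual landmark set used may differ."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]  # (N, N, 3) coordinate differences
    dists = np.linalg.norm(diffs, axis=-1)                 # (N, N) distance matrix
    iu = np.triu_indices(len(landmarks), k=1)              # indices above the diagonal
    return dists[iu]                                       # (N*(N-1)/2,) feature vector

# e.g. 68 landmarks yield a 2278-dimensional feature vector
features = pairwise_distance_features(np.random.rand(68, 3))
```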
Abstract: Emotions, an important component of social interaction, can be studied with the help of android robots, whose appearance is designed to be as human-like as possible. The production and customization of android robots is expensive and time-consuming, so it may be practical to use a digital replica instead. To investigate whether the difference in appearance leads to perceptual differences in terms of emotions, a robot head was digitally replicated. In an experiment, the basic emotions evaluated in a preliminary study were compared in three conditions and then statistically analyzed. It was found that, apart from fear, all emotions were recognized on the real robot head. The digital head with "ideal" emotions performed better than the real head, except for the anger representation, which offers optimization potential for the real head. Contrary to expectations, significant differences between the real head and the replicated head with the same emotions were only found in the representation of surprise.
Abstract: When researching the acceptance of robots in Human-Robot Interaction, the Uncanny Valley needs to be considered. Reusable and standardized measures for it are essential. In this paper, one such questionnaire was translated into German. The translated indices were evaluated for reliability (n=140) with Cronbach's alpha. Additionally, the items were tested for problematic correlations with an exploratory and a confirmatory factor analysis. The results show good reliability for the translated indices and reveal some items that need to be checked further.
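For reference, Cronbach's alpha for a questionnaire index can be computed from a respondents-by-items score matrix as sketched below (a generic illustration, not the evaluation code used in the study):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents, items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Generic sketch, not the study's own evaluation code."""
    k = scores.shape[1]                            # number of items in the index
    item_vars = scores.var(axis=0, ddof=1)         # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```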
Abstract: This paper describes how current Machine Learning (ML) techniques combined with simple rule-based animation routines turn an android robot head into an embodied conversational agent with ChatGPT as its core component. The android robot head is described, technical details are given on how lip-sync animation is achieved, and general software design decisions are presented. A public presentation of the system revealed improvement opportunities that are reported and that guide our iterative implementation approach.
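One common, simple rule-based lip-sync heuristic maps the loudness of the speech audio to a per-frame jaw-opening value; the sketch below shows this idea in generic form (illustrative only, not necessarily the technique used for the android head):

```python
import numpy as np

def jaw_openings(audio: np.ndarray, sample_rate: int, fps: int = 25) -> np.ndarray:
    """Map audio loudness to per-frame jaw openings in [0, 1] by taking
    the RMS amplitude of each frame-sized window and normalizing it.
    Illustrative heuristic, not necessarily the paper's method."""
    frame_len = sample_rate // fps                       # audio samples per animation frame
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len).astype(float)
    rms = np.sqrt((frames ** 2).mean(axis=1))            # loudness per frame
    return rms / (rms.max() + 1e-9)                      # normalize to [0, 1]
```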
Abstract: Laboratories are increasingly being automated. In small laboratories, individual processes can be fully automated, but this is usually not economically viable. Nevertheless, individual process steps can be performed by flexible, mobile robots to relieve the laboratory staff. As a contribution to meeting the requirements of a life science laboratory, the mobile, dexterous robot Kevin was designed by the Fraunhofer IPA research institute in Stuttgart, Germany. Kevin is a mobile service robot that can perform non-value-adding activities such as the transportation of labware. This paper gives an overview of Kevin's functionalities and development process, and presents a preliminary study on how its lights and sounds improve user interaction.