We describe an approach for aligning an LLM-based dialogue agent using global (i.e., dialogue-level) rewards, while also taking into account naturally occurring multimodal signals. At a high level, our approach (dubbed GELI) learns a local, turn-level reward model by decomposing the human-provided Global Explicit (GE) session-level reward, using Local Implicit (LI) multimodal reward signals to crossmodally shape the reward decomposition step. This decomposed reward model is then used as part of the standard RLHF pipeline to improve an LLM-based dialogue agent. We run quantitative and qualitative human studies to evaluate the performance of our GELI approach, and find that it shows consistent improvements across various conversational metrics compared to baseline methods.
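To make the decomposition idea concrete, below is a minimal sketch, not the GELI implementation, of a turn-level reward model trained so that its per-turn predictions sum to the Global Explicit session-level reward, with an auxiliary term nudging each prediction toward a Local Implicit multimodal signal (e.g., a per-turn facial-affect score). All module names, dimensions, and the weighting coefficient are illustrative assumptions.

```python
# Minimal sketch of global-reward decomposition with crossmodal shaping.
# Not the authors' code; shapes and names are placeholders.
import torch
import torch.nn as nn

class TurnRewardModel(nn.Module):
    def __init__(self, turn_dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(turn_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, turn_embeddings):                 # (batch, turns, turn_dim)
        return self.net(turn_embeddings).squeeze(-1)    # (batch, turns)

def decomposition_loss(model, turn_embeddings, global_reward, local_signal, alpha=0.1):
    """Sum of predicted turn rewards should match the session-level reward,
    while each turn reward is softly aligned with the local multimodal signal."""
    per_turn = model(turn_embeddings)
    decomposition = (per_turn.sum(dim=1) - global_reward).pow(2).mean()
    shaping = (per_turn - local_signal).pow(2).mean()
    return decomposition + alpha * shaping

# toy usage with random tensors
model = TurnRewardModel()
emb = torch.randn(4, 10, 768)     # 4 dialogues, 10 turns each
ge = torch.randn(4)               # session-level human reward
li = torch.randn(4, 10)           # per-turn multimodal signal
loss = decomposition_loss(model, emb, ge, li)
loss.backward()
```

In a sketch like this, the learned per-turn rewards would then stand in for the usual scalar reward in an RLHF loop (e.g., PPO fine-tuning of the dialogue agent).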
Large language models (LLMs) are capable of many natural language tasks, yet they are far from perfect. In health applications, grounding and interpreting domain-specific and non-linguistic data is important. This paper investigates the capacity of LLMs to deliver multi-modal health predictions based on contextual information (e.g., user demographics, health knowledge) and physiological data (e.g., resting heart rate, sleep minutes). We present a comprehensive evaluation of eight state-of-the-art LLMs with diverse prompting and fine-tuning techniques on six public health datasets (PM-Data, LifeSnaps, GLOBEM, AW_FB, MIT-BIH, and MIMIC-III). Our experiments cover thirteen consumer health prediction tasks in mental health, activity, metabolic, sleep, and cardiac assessment. Our fine-tuned model, Health-Alpaca, exhibits performance comparable to larger models (GPT-3.5 and GPT-4), achieving the best performance on 5 out of 13 tasks. Ablation studies highlight the effectiveness of context enhancement strategies and the generalization capability of the fine-tuned models across training datasets and training-sample sizes. Notably, we observe that our context enhancement can yield up to a 23.8% improvement in performance. Constructing contextually rich prompts (combining user context, health knowledge, and temporal information) yields synergistic improvements, with the inclusion of health knowledge context in particular significantly enhancing overall performance.
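The sketch below illustrates the spirit of the context enhancement strategy described above: assembling user context, health knowledge, and temporal physiological data into one prompt. The field names, wording, and example values are hypothetical and do not reproduce the paper's actual prompt templates.

```python
# Illustrative prompt construction combining user context, health knowledge,
# and temporal physiological data (hypothetical template, not the paper's).
def build_health_prompt(user_context, physiological, health_knowledge, task):
    context = ", ".join(f"{k}: {v}" for k, v in user_context.items())
    signals = "; ".join(
        f"{name} over the past week: {values}" for name, values in physiological.items()
    )
    return (
        f"User context: {context}.\n"
        f"Relevant health knowledge: {health_knowledge}\n"
        f"Physiological data (temporal): {signals}.\n"
        f"Task: {task}\n"
        "Answer with a single numeric estimate."
    )

prompt = build_health_prompt(
    user_context={"age": 28, "sex": "F"},
    physiological={"resting heart rate": [61, 63, 60, 62, 64, 61, 60],
                   "sleep minutes": [410, 395, 430, 400, 415, 390, 405]},
    health_knowledge="Typical adult resting heart rate is 60-100 bpm; "
                     "adults generally need about 7-9 hours of sleep.",
    task="Estimate this user's stress level on a 1-5 scale.",
)
print(prompt)
```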
In this paper, we introduce a novel conceptual model for a robot's behavioral adaptation in its long-term interaction with humans, integrating dynamic robot role adaptation with principles of flow experience from psychology. This conceptualization introduces a hierarchical interaction objective grounded in the flow experience, serving as the overarching adaptation goal for the robot. This objective intertwines cognitive and affective sub-objectives and incorporates individual and group-level human factors. The dynamic role adaptation approach is a cornerstone of our model, highlighting the robot's ability to fluidly adapt its support roles, from leader to follower, with the aim of maintaining equilibrium between activity challenge and user skill, thereby fostering the user's optimal flow experiences. Moreover, this work provides a comprehensive exploration of the limitations and potential applications of our proposed conceptualization. Our model places particular emphasis on the multi-person HRI paradigm, a dimension of HRI that is both under-explored and challenging. In doing so, we aspire to extend the applicability and relevance of our conceptualization within the HRI field, contributing to the future development of adaptive social robots capable of sustaining long-term interactions with humans.
The most meaningful connections between people are often fostered through the expression of shared vulnerability and emotional experiences in personal narratives. We introduce a new task of identifying similarity in personal stories based on empathic resonance, i.e., the extent to which two people empathize with each other's experiences, as opposed to the raw semantic or lexical similarity that has predominantly been studied in NLP. Using insights from social psychology, we craft a framework that operationalizes empathic similarity in terms of three key features of stories: main events, emotional trajectories, and overall morals or takeaways. We create EmpathicStories, a dataset of 1,500 personal stories annotated with our empathic similarity features, and 2,000 pairs of stories annotated with empathic similarity scores. Using our dataset, we fine-tune a model to compute empathic similarity of story pairs, and show that it outperforms semantic similarity models on automated correlation and retrieval metrics. Through a user study with 150 participants, we also assess the effect our model has on retrieving stories that users empathize with, compared to naive semantic similarity-based retrieval, and find that participants empathized significantly more with stories retrieved by our model. Our work has strong implications for the use of empathy-aware models to foster human connection and empathy between people.
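As a rough illustration of the fine-tune-and-retrieve setup, here is a minimal sketch assuming the sentence-transformers library: a bi-encoder is fine-tuned on story pairs labeled with (normalized) empathic similarity scores and then used to score new pairs. The base model, the toy stories, and the labels are placeholders; the paper's actual architecture and training details may differ.

```python
# Minimal bi-encoder fine-tuning sketch on empathic-similarity-labeled pairs
# (assumes the sentence-transformers library; data and model are illustrative).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# story pairs with human empathic-similarity labels normalized to [0, 1]
train_examples = [
    InputExample(texts=["I lost my dog last spring and felt so alone.",
                        "My cat passed away and I still miss her."], label=0.9),
    InputExample(texts=["I aced my driving test today!",
                        "I failed an exam and felt crushed."], label=0.2),
]
loader = DataLoader(train_examples, batch_size=2, shuffle=True)
loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)

# score a new story pair with the fine-tuned encoder
emb = model.encode(["Story A text ...", "Story B text ..."], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))
```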
Accurately modeling affect dynamics, which refers to the changes and fluctuations in emotions and affective displays during human conversations, is crucial for understanding human interactions. By analyzing affect dynamics, we can gain insights into how people communicate, respond to different situations, and form relationships. However, modeling affect dynamics is challenging due to contextual factors, such as the complex and nuanced nature of interpersonal relationships, the situation, and other factors that influence affective displays. To address this challenge, we propose a Cross-person Memory Transformer (CPM-T) framework that explicitly models affect dynamics (intrapersonal and interpersonal influences) by identifying verbal and non-verbal cues, and leverages a large language model's pre-trained knowledge to perform verbal reasoning. The CPM-T framework maintains memory modules to store and update the contexts within the conversation window, enabling the model to capture dependencies between earlier and later parts of a conversation. Additionally, our framework employs cross-modal attention to effectively align information from multiple modalities and leverages cross-person attention to align behaviors in multi-party interactions. We evaluate the effectiveness and generalizability of our approach on three publicly available datasets for joint engagement, rapport, and human belief prediction tasks. Remarkably, the CPM-T framework outperforms baseline models in average F1-scores by up to 7.3%, 9.3%, and 2.0%, respectively. Finally, we demonstrate the importance of each component in the framework via ablation studies with respect to multimodal temporal behavior.
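The following is an illustrative sketch of the cross-modal attention and memory-window ideas mentioned above: one person's verbal features attend over their nonverbal features, and a rolling memory of aligned turn summaries is kept for the conversation window. This is not the CPM-T code; module names, dimensions, and the memory scheme are assumptions.

```python
# Illustrative cross-modal alignment with a rolling conversation-window memory
# (hypothetical stand-in for the described mechanism, not the CPM-T code).
import torch
import torch.nn as nn

class CrossModalAlign(nn.Module):
    def __init__(self, dim=256, heads=4, memory_turns=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = []                    # rolling memory of aligned turn summaries
        self.memory_turns = memory_turns

    def forward(self, verbal, nonverbal):   # (batch, time, dim) each
        # verbal features query the nonverbal stream to align the two modalities
        aligned, _ = self.attn(query=verbal, key=nonverbal, value=nonverbal)
        turn_summary = aligned.mean(dim=1)  # one vector per turn
        self.memory.append(turn_summary.detach())
        self.memory = self.memory[-self.memory_turns:]      # keep only the window
        return aligned, torch.stack(self.memory, dim=1)     # (batch, mem, dim)

module = CrossModalAlign()
v = torch.randn(2, 20, 256)   # verbal (e.g., token-level) features
n = torch.randn(2, 20, 256)   # nonverbal (e.g., face/pose) features
aligned, memory = module(v, n)
print(aligned.shape, memory.shape)
```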
As we move closer to real-world AI systems, AI agents must be able to deal with multiparty (group) conversations. Recognizing and interpreting multiparty behaviors is challenging, as the system must recognize individual behavioral cues, deal with the complexity of multiple streams of data from multiple people, and recognize the subtle contingent social exchanges that take place amongst group members. To tackle this challenge, we propose the Multiparty-Transformer (Multipar-T), a transformer model for multiparty behavior modeling. The core component of our proposed approach is the Crossperson Attention, which is specifically designed to detect contingent behavior between pairs of people. We verify the effectiveness of Multipar-T on a publicly available video-based group engagement detection benchmark, where it outperforms state-of-the-art approaches in average F-1 scores by 5.2% and individual class F-1 scores by up to 10.0%. Through qualitative analysis, we show that our Crossperson Attention module is able to discover contingent behavior.
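A minimal sketch of a crossperson attention operation in the spirit of the module described above is shown below: the target person's per-frame behavior features query another group member's features, so responses contingent on the partner's cues receive high attention weight. The shapes, dimensions, and class name are illustrative assumptions, not the Multipar-T implementation.

```python
# Illustrative crossperson attention between two people's behavior streams
# (hypothetical sketch; not the Multipar-T code).
import torch
import torch.nn as nn

class CrossPersonAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target, partner):     # (batch, time, dim) each
        # the target person's frames attend over the partner's frames
        out, weights = self.attn(query=target, key=partner, value=partner)
        return out, weights                 # weights: (batch, time, time)

attn = CrossPersonAttention()
person_a = torch.randn(1, 30, 128)  # per-frame behavior features, person A
person_b = torch.randn(1, 30, 128)  # per-frame behavior features, person B
contingent, w = attn(person_a, person_b)
print(contingent.shape, w.shape)
```

In a sketch like this, high attention weight between a frame of person A and an earlier frame of person B can be read as a candidate contingent exchange, which is the kind of pattern the qualitative analysis above inspects.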
Affect understanding capability is essential for social robots to autonomously interact with a group of users in an intuitive and reciprocal way. However, the challenge of multi-person affect understanding comes not only from the accurate perception of each user's affective state (e.g., engagement) but also from the recognition of the affect interplay between the members (e.g., joint engagement), which presents as complex but subtle nonverbal exchanges between them. Here we present a novel hybrid framework for identifying a parent-child dyad's joint engagement by combining a deep learning framework with various video augmentation techniques. Using a dataset of parent-child dyads reading storybooks together with a social robot at home, we first train RGB frame- and skeleton-based joint engagement recognition models on datasets augmented with four video augmentation techniques (General Aug, DeepFake, CutOut, and Mixed) to improve joint engagement classification performance. Second, we demonstrate experimental results on the use of the trained models in the robot-parent-child interaction context. Third, we introduce a behavior-based metric for evaluating the learned representations of the models to investigate model interpretability when recognizing joint engagement. This work serves as a first step toward fully unlocking the potential of end-to-end video understanding models pre-trained on large public datasets and augmented with data augmentation and visualization techniques for affect recognition in multi-person human-robot interaction in the wild.
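As a concrete illustration of one of the augmentations named above (CutOut), here is a hypothetical sketch that occludes the same random square patch in every frame of a clip before it is fed to a joint engagement classifier. The patch size, tensor layout, and temporally consistent placement are assumptions for demonstration, not the paper's exact configuration.

```python
# Illustrative CutOut-style augmentation for a video clip
# (hypothetical configuration; not the paper's pipeline).
import torch

def video_cutout(clip, patch=32):
    """clip: (frames, channels, height, width). Zeroes one random square patch
    at the same location in every frame, so the occlusion is temporally consistent."""
    _, _, h, w = clip.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    augmented = clip.clone()
    augmented[:, :, top:top + patch, left:left + patch] = 0.0
    return augmented

clip = torch.rand(16, 3, 224, 224)   # a 16-frame RGB clip
aug = video_cutout(clip)
print((aug == 0).float().mean())     # fraction of pixels zeroed by the patch
```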
The prevalence of suicide has been on the rise since the 20th century, causing severe emotional damage to individuals, families, and communities alike. Despite the severity of this suicide epidemic, there is so far no reliable and systematic way to assess the suicide intent of a given individual. Through efforts to automate and systematize the diagnosis of mental illnesses over the past few years, verbal and acoustic behaviors have received increasing attention as biomarkers, but little has been done to study eyelids, gaze, and head pose in evaluating suicide risk. This study explores statistical analysis, feature selection, and machine learning classification as means of suicide risk evaluation and nonverbal behavioral interpretation. Applying these methods to the eye and head signals extracted from our unique dataset, this study finds that high-risk suicidal individuals experience psycho-motor retardation and symptoms of anxiety and depression, characterized by eye contact avoidance, slower blinks, and a downward eye gaze. By comparing results from different methods of classification, we determined that these features are highly capable of automatically classifying different levels of suicide risk consistently and with high accuracy, above 98%. Our conclusion corroborates psychological studies and shows the great potential of a systematic approach to suicide risk evaluation that is adoptable by both healthcare providers and naive observers.
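The sketch below illustrates the general shape of the pipeline the abstract describes, feature selection followed by cross-validated classification over risk levels, using synthetic stand-in data. The feature set, labels, and choice of selector and classifier are assumptions; they are not the study's actual features, models, or results.

```python
# Illustrative feature-selection + classification pipeline on synthetic data
# (not the study's code or data).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))    # e.g., blink rate, gaze pitch, head-pose statistics
y = rng.integers(0, 3, size=60)  # synthetic risk levels: low / medium / high

pipeline = make_pipeline(
    SelectKBest(f_classif, k=6),                       # keep the most discriminative features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```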
Human-robot interaction can be regarded as a flow between users and robots. Designing good interaction flows takes significant effort and requires field testing. Unfortunately, the interaction flow design process is often very disjointed, with users experiencing prototypes, designers forming those prototypes, and developers implementing them as independent processes. In this paper, we present the Interaction Flow Editor (IFE), a new human-robot interaction prototyping tool that enables everyday users to create and modify their own interactions, while still providing a full suite of features that is powerful enough for developers and designers to create complex interactions. We also discuss the Flow Engine, a flexible and adaptable framework for executing robot interaction flows authored through the IFE. Finally, we present our case study results, which demonstrate how older adults, aged 70 and above, can design and iterate interactions in real time on a robot using the IFE.
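Purely as an illustration of what executing a declarative interaction flow can look like, here is a toy sketch of a flow represented as named steps with transitions, stepped through by a small loop. The flow format, step fields, and runner are hypothetical placeholders; the abstract does not specify the IFE flow format or the Flow Engine API.

```python
# Toy interaction flow and runner (hypothetical format, not the IFE/Flow Engine).
flow = {
    "start": {"say": "Hi! Would you like to hear today's weather?",
              "transitions": {"yes": "weather", "no": "goodbye"}},
    "weather": {"say": "It's sunny and 72 degrees.",
                "transitions": {None: "goodbye"}},
    "goodbye": {"say": "Okay, talk to you later!", "transitions": {}},
}

def run_flow(flow, start="start"):
    state = start
    while state:
        step = flow[state]
        print(f"ROBOT: {step['say']}")               # stand-in for robot speech
        if not step["transitions"]:
            break
        user = input("USER: ").strip().lower() or None
        # fall back to the default (None) transition; unknown input ends this toy flow
        state = step["transitions"].get(user) or step["transitions"].get(None)

if __name__ == "__main__":
    run_flow(flow)
```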
A significant number of college students suffer from mental health issues that impact their physical, social, and occupational outcomes. Various scalable technologies have been proposed to mitigate the negative impact of mental health disorders. However, the evaluation of these technologies, if done at all, often reports mixed results on improving users' mental health. We need to better understand the factors that align a user's attributes and needs with technology-based interventions for positive outcomes. In psychotherapy theory, the therapeutic alliance and rapport between a therapist and a client are regarded as the basis for therapeutic success. In prior work, social robots have shown the potential to build rapport and a working alliance with users in various settings. In this work, we explore the use of a social robot coach to deliver positive psychology interventions to college students living in on-campus dormitories. We recruited 35 college students to participate in our study and deployed a social robot coach in their rooms. The robot delivered daily positive psychology sessions alongside other useful skills such as delivering the weather forecast and scheduling reminders. We found a statistically significant improvement in participants' psychological wellbeing, mood, and readiness to change behavior for improved wellbeing after they completed the study. Furthermore, students' personality traits were found to have a significant association with intervention efficacy. Analysis of the post-study interviews revealed students' appreciation of the robot's companionship and their concerns about privacy.