Abstract:Self-disclosure, the sharing of one's thoughts and feelings, is affected by the perceived relationship between individuals. While chatbots are increasingly used for self-disclosure, the impact of a chatbot's framing on users' self-disclosure remains under-explored. We investigated how a chatbot's description of its relationship with users, particularly in terms of ephemerality, affects self-disclosure. Specifically, we compared a Familiar chatbot, presenting itself as a companion remembering past interactions, with a Stranger chatbot, presenting itself as a new, unacquainted entity in each conversation. In a mixed factorial design, participants engaged with either the Familiar or the Stranger chatbot in two sessions across two days, with one conversation focusing on Emotional-disclosure and the other on Factual-disclosure. When Emotional-disclosure was sought in the first chatting session, Stranger-condition participants felt more comfortable self-disclosing. However, when Factual-disclosure was sought first, this difference disappeared and Familiar-condition participants instead reported greater enjoyment. Qualitative findings showed that the Stranger chatbot afforded anonymity and reduced judgement, whereas the Familiar chatbot sometimes felt intrusive unless rapport had been built through low-risk Factual-disclosure.
Abstract:The ubiquity of smartphones has led to an increase in the supply of on-demand healthcare. For example, people can share their illness-related experiences with others similar to themselves, and healthcare experts can offer advice for better treatment and care for remediable, terminal and mental illnesses. Alongside this human-to-human communication, there has been increased use of human-to-computer digital health messaging, such as chatbots. These can prove advantageous as they offer synchronous and anonymous feedback without the need for a human conversational partner. However, there are many subtleties involved in human conversation that a computer agent may not properly exhibit. For example, there are various conversational styles, etiquettes, politeness strategies and empathic responses that need to be chosen appropriately for the conversation. Encouragingly, the Computers Are Social Actors (CASA) paradigm posits that people apply the same social norms to computers as they do to people. Building on this, previous studies have applied conversational strategies to computer agents to make them embody more favourable human characteristics. However, if a computer agent fails in this regard, it can provoke negative reactions from users. Therefore, in this dissertation we describe a series of studies we carried out to lead to more effective human-to-computer digital health messaging. In our first study, we use the crowd [...] Our second study investigates the effect of a health chatbot's conversational style [...] In our final study, we investigate the format used by a chatbot when [...] In summary, we have researched how to create more effective digital health interventions, from generating health messages, to choosing an appropriate formality of messaging, and finally to formatting messages that reference a user's previous utterances.
Abstract:With projections of ageing populations and increasing rates of dementia, there is a need for professional caregivers. Assistive robots have been proposed as a solution to this, as they can assist people both physically and socially. However, caregivers often need to use acts of deception (such as misdirection or white lies) in order to ensure necessary care is provided while limiting negative impacts on the cared-for, such as emotional distress or loss of dignity. We discuss such uses of deception and contextualise them within robotics.
Abstract:Large language models (LLMs) are increasingly capable and prevalent, and can be used to produce creative content. The quality of content is influenced by the prompt used, with more specific prompts that incorporate examples generally producing better results. Following from this, instructions written for crowdsourcing tasks (which are specific and include examples to guide workers) could serve as effective LLM prompts. To explore this, we used a previous crowdsourcing pipeline that gave examples to people to help them generate a collectively diverse corpus of motivational messages. We then used this same pipeline to generate messages using GPT-4, and compared the collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the prompts based on the crowdsourcing pipeline led GPT-4 to produce more diverse messages than either of the baseline prompts did. We also discuss implications of the messages generated by both human writers and LLMs.