Abstract: Purpose: Generative artificial intelligence (GenAI) has progressed in its capabilities and has seen explosive growth in adoption. However, the consumer's perspective on its use, particularly in specific scenarios such as financial advice, remains unclear. This research develops a model of how trust is built in the advice GenAI gives when answering financial questions. Design/methodology/approach: The model is tested with survey data using structural equation modelling (SEM) and multi-group analysis (MGA). The MGA compares two scenarios, one where the consumer asks a specific question and one where the question is vague. Findings: This research identifies that trust is built differently when consumers ask a specific financial question than when they ask a vague one. Humanness has a different effect in the two scenarios: when a financial question is specific, human-like interaction does not strengthen trust, whereas when a question is vague, humanness builds trust. Four factors build trust in both scenarios: human oversight and being in the loop; transparency and control; accuracy and usefulness; and ease of use and support. Originality/value: This research contributes to a better understanding of the consumer's perspective when using GenAI for financial questions and highlights the importance of understanding GenAI in specific contexts and from the viewpoints of specific stakeholders.
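The abstract does not specify the measurement model, so the sketch below only illustrates the kind of SEM-plus-MGA workflow it describes: the same structural model is fitted separately to the "specific" and "vague" scenarios and the Humanness -> Trust path is compared across groups. All construct and indicator names (Humanness, Oversight, Trust, h1..t3) and the simulated data are hypothetical placeholders, and the per-group refit is a simplified stand-in for a full invariance-tested MGA.

```python
# Minimal SEM + multi-group sketch with semopy; all names and data are
# hypothetical, not the paper's actual constructs or items.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 400
scenario = rng.choice(["specific", "vague"], size=n)

# Simulate latent scores; humanness -> trust only in the "vague" scenario,
# mirroring the finding reported in the abstract.
humanness = rng.normal(size=n)
oversight = rng.normal(size=n)
beta_h = np.where(scenario == "vague", 0.5, 0.0)
trust = beta_h * humanness + 0.4 * oversight + rng.normal(scale=0.5, size=n)

def indicators(latent, prefix):
    # Three noisy survey indicators per latent construct.
    return {f"{prefix}{i}": latent + rng.normal(scale=0.4, size=n)
            for i in (1, 2, 3)}

df = pd.DataFrame({**indicators(humanness, "h"),
                   **indicators(oversight, "o"),
                   **indicators(trust, "t"),
                   "scenario": scenario})

desc = """
# measurement model
Humanness =~ h1 + h2 + h3
Oversight =~ o1 + o2 + o3
Trust     =~ t1 + t2 + t3
# structural model
Trust ~ Humanness + Oversight
"""

# Simplified multi-group analysis: fit the same model per scenario and
# compare the Humanness -> Trust path across the two groups.
for name, sub in df.groupby("scenario"):
    model = semopy.Model(desc)
    model.fit(sub.drop(columns="scenario"))
    est = model.inspect()
    path = est[(est["lval"] == "Trust") & (est["op"] == "~") &
               (est["rval"] == "Humanness")]
    print(name, path[["Estimate", "p-value"]].to_string(index=False))
```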
Abstract: Artificial intelligence (AI)-based chatbots have brought unprecedented business potential. This study aims to explore consumers' trust in, and responses to, a text-based chatbot in e-commerce, including the moderating effects of task complexity and chatbot identity disclosure. A survey was conducted, yielding 299 usable responses, and ordinary least squares (OLS) regression was used to test the hypotheses. First, consumers' perceptions of both the empathy and the friendliness of the chatbot positively impact their trust in it. Second, task complexity negatively moderates the relationship between friendliness and consumers' trust. Third, disclosure of the text-based chatbot's identity negatively moderates the relationship between empathy and consumers' trust, while it positively moderates the relationship between friendliness and consumers' trust. Fourth, consumers' trust in the chatbot increases their reliance on it and decreases their resistance to it in future interactions. Adopting the stimulus-organism-response (SOR) framework, this study provides important insights into consumers' perceptions of and responses to the text-based chatbot. The findings also offer suggestions for increasing consumers' positive responses to text-based chatbots. Extant studies have investigated the effects of automated bots' attributes on consumers' perceptions, but the boundary conditions of these effects have been largely ignored. This research is one of the first attempts to provide a deep understanding of consumers' responses to a chatbot.
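As a rough illustration of the moderation tests the abstract reports, the sketch below fits an OLS model with interaction terms in statsmodels. The column names (trust, empathy, friendliness, task_complexity, disclosure), the 0/1 coding of the moderators, and the simulated effect sizes are all assumptions for the example, not the study's actual data or measures.

```python
# Minimal OLS moderation sketch; variable names, coding, and effect
# sizes are hypothetical placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 299
df = pd.DataFrame({
    "empathy": rng.normal(size=n),
    "friendliness": rng.normal(size=n),
    "task_complexity": rng.integers(0, 2, size=n),  # 0 = simple, 1 = complex
    "disclosure": rng.integers(0, 2, size=n),       # 0 = undisclosed, 1 = disclosed
})
# Simulate the hypothesised pattern: friendliness helps less on complex
# tasks; disclosure dampens the empathy effect and amplifies friendliness.
df["trust"] = (0.5 * df["empathy"] + 0.5 * df["friendliness"]
               - 0.3 * df["friendliness"] * df["task_complexity"]
               - 0.3 * df["empathy"] * df["disclosure"]
               + 0.3 * df["friendliness"] * df["disclosure"]
               + rng.normal(scale=0.5, size=n))

# Each `*` expands to main effects plus the interaction term; the
# interaction coefficients carry the moderation hypotheses.
model = smf.ols(
    "trust ~ empathy * disclosure + friendliness * disclosure"
    " + friendliness * task_complexity",
    data=df,
).fit()
print(model.summary())
```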