
"chatbots": models, code, and papers

A Multi-Turn Emotionally Engaging Dialog Model

Sep 14, 2019
Yubo Xie, Ekaterina Svikhnushina, Pearl Pu

Open-domain dialog systems (also known as chatbots) have drawn increasing attention in natural language processing. Some recent work aims to incorporate affect information into sequence-to-sequence neural dialog modeling to make responses emotionally richer, while other work uses hand-crafted rules to determine the desired emotion of the response. However, these approaches do not explicitly learn the subtle emotional interactions captured in human dialogs. In this paper, we propose a multi-turn dialog system that learns to generate emotional responses of the kind that, so far, only humans know how to produce. In offline experiments against two baseline models, our method achieves the best perplexity scores. Further human evaluations confirm that our chatbot keeps track of the conversation context and generates emotionally more appropriate responses while performing equally well on grammar.
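Perplexity, the offline metric used for comparison above, is the exponentiated average negative log-likelihood per token. A minimal generic sketch (not the authors' code):

```python
import math

def perplexity(log_probs):
    """Perplexity of a sequence given per-token log-probabilities (natural log).

    Lower perplexity means the model assigns higher probability to the
    observed tokens.
    """
    if not log_probs:
        raise ValueError("need at least one token")
    avg_nll = -sum(log_probs) / len(log_probs)
    return math.exp(avg_nll)

# Sanity check: a model assigning each token probability 0.25 (a uniform
# 4-way choice) has perplexity 4, regardless of sequence length.
uniform = [math.log(0.25)] * 10
print(perplexity(uniform))
```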

  

A Conversational Interface to Improve Medication Adherence: Towards AI Support in Patient's Treatment

Mar 03, 2018
Ahmed Fadhil

Medication adherence is of utmost importance for many chronic conditions, regardless of the disease type, yet engaging patients in self-tracking their medication is a big challenge. One way to potentially reduce this burden is to use reminders that promote wellness throughout all stages of life and improve medication adherence. Chatbots have proven effective at prompting users to engage in certain activities, such as medication adherence. In this paper, we discuss "Roborto", a chatbot that creates an engaging, interactive, and intelligent environment for patients and assists in positive lifestyle modification. We introduce a way for healthcare providers to track patients' adherence and intervene whenever necessary. We describe the health, technical, and behavioural approaches to the problem of medication non-adherence and propose a diagnostic and decision-support tool. The proposed study will be implemented and validated through a pilot experiment with users to measure the efficacy of the approach.

* 7 pages 
  

Towards Automated Customer Support

Sep 02, 2018
Momchil Hardalov, Ivan Koychev, Preslav Nakov

Recent years have seen growing interest in conversational agents, such as chatbots, which are a very good fit for automated customer support because the domain in which they need to operate is narrow. This interest was in part inspired by recent advances in neural machine translation, especially the rise of sequence-to-sequence (seq2seq) and attention-based models such as the Transformer, which have been applied to various other tasks and have opened new research directions in question answering, chatbots, and conversational systems. Still, in many cases, it might be feasible and even preferable to use simple information retrieval techniques. Thus, here we compare three different models: (i) a retrieval model, (ii) a sequence-to-sequence model with attention, and (iii) a Transformer. Our experiments with the Twitter Customer Support Dataset, which contains over two million posts from the customer support services of twenty major brands, show that the seq2seq model outperforms the other two in terms of semantics and word overlap.
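To make the retrieval-model idea concrete (a generic bag-of-words illustration, not the paper's implementation): store past (question, reply) pairs and return the reply whose question is most similar to the incoming query.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, pairs: list) -> str:
    """Return the stored reply whose question best matches the query."""
    q = Counter(query.lower().split())
    best = max(pairs, key=lambda p: cosine(q, Counter(p[0].lower().split())))
    return best[1]

# Hypothetical customer-support pairs for illustration.
support_pairs = [
    ("my package never arrived", "Sorry to hear that! Please DM your order number."),
    ("how do i reset my password", "You can reset it under account settings."),
]
print(retrieve("package still not arrived", support_pairs))
```

In practice TF-IDF weighting and a larger index would replace the raw counts, but the selection logic is the same.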

* Accepted as regular paper at AIMSA 2018 
  

Contextual Topic Modeling For Dialog Systems

Oct 19, 2018
Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Angeliki Metanillou, Anushree Venkatesh, Raefer Gabriel, Arindam Mandal

Accurate prediction of conversation topics can be a valuable signal for creating coherent and engaging dialog systems. In this work, we focus on context-aware topic classification methods for identifying topics in free-form human-chatbot dialogs. We extend previous work on neural topic classification and unsupervised topic keyword detection by incorporating conversational context and dialog act features. On annotated data, we show that incorporating context and dialog acts yields relative gains of 35% in topic classification accuracy and 11% in unsupervised keyword detection recall for conversational interactions where topics frequently span multiple utterances. We show that topical metrics such as topical depth are highly correlated with dialog evaluation metrics such as coherence and engagement, implying that conversational topic models can predict user satisfaction. Our work on detecting conversation topics and keywords can be used to guide chatbots towards coherent dialog.
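One simple way to combine conversational context and dialog-act features into classifier input, sketched below; the act inventory, context window, and 0.5 down-weighting are illustrative assumptions, not the paper's design.

```python
from collections import Counter

# Hypothetical dialog-act inventory for illustration.
DIALOG_ACTS = ["statement", "question", "opinion"]

def context_features(turns, acts, window=2):
    """Build features for the current turn: a bag-of-words over the current
    utterance plus the last `window` context turns (context words
    down-weighted), concatenated with a one-hot of the current dialog act."""
    bow = Counter(turns[-1].lower().split())
    for prev in turns[-1 - window:-1]:
        for tok in prev.lower().split():
            bow[tok] += 0.5  # context words count less than current-turn words
    act_onehot = [1.0 if a == acts[-1] else 0.0 for a in DIALOG_ACTS]
    return bow, act_onehot

turns = ["i love sci-fi movies", "have you seen dune", "yes the visuals were great"]
acts = ["opinion", "question", "statement"]
bow, act = context_features(turns, acts)
print(bow["dune"], act)  # "dune" appears only in context, so it gets weight 0.5
```

The resulting features would feed a standard classifier; the point is that the topic word "dune" from an earlier turn remains visible when classifying the current utterance.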

  

Automatic Evaluation and Moderation of Open-domain Dialogue Systems

Nov 03, 2021
Chen Zhang, João Sedoc, Luis Fernando D'Haro, Rafael Banchs, Alexander Rudnicky

In recent years, dialogue systems have attracted significant interest in both academia and industry. The discipline of open-domain dialogue systems, aka chatbots, has gained particular momentum. Yet a long-standing challenge is the lack of effective automatic evaluation metrics, which significantly impedes current research. Common practice in assessing the performance of open-domain dialogue models involves extensive human evaluation of the final deployed models, which is both time- and cost-intensive. Moreover, a recent trend in building open-domain chatbots involves pre-training dialogue models on large amounts of social media conversation data. However, the information contained in social media conversations may be offensive and inappropriate, and indiscriminate use of such data can result in insensitive and toxic generative models. This paper describes the data, baselines, and results obtained for Track 5 at the Dialogue System Technology Challenge 10 (DSTC10).

  

Benchmarking Automatic Detection of Psycholinguistic Characteristics for Better Human-Computer Interaction

Dec 18, 2020
Sanja Štajner, Seren Yenikent, Marc Franco-Salvador

When two people pay attention to each other and are interested in what the other has to say or write, they almost instantly adapt their writing/speaking style to match the other. For successful interaction with a user, chatbots and dialog systems should be able to do the same. We propose a framework consisting of five psycholinguistic textual characteristics for better human-computer interaction. We describe the annotation processes for collecting the data, and benchmark five binary classification tasks, experimenting with different training sizes and model architectures. We perform experiments in English, Spanish, German, Chinese, and Arabic. The best architectures noticeably outperform several baselines and achieve macro-averaged F1-scores between 72% and 96%, depending on the language and the task. Similar results are achieved even with a small amount of training data. The proposed framework proved fairly easy to model for various languages, even with a small amount of manually annotated data, if the right architectures are used. At the same time, it showed potential for improving user satisfaction if applied in existing commercial chatbots.
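The macro-averaged F1 reported above is the unweighted mean of per-class F1 scores, so minority classes count as much as majority ones. A small self-contained implementation:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then average unweighted."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Class 1: P=1.0, R=0.5, F1=2/3; class 0: P=2/3, R=1.0, F1=4/5; macro = 11/15.
print(macro_f1([1, 1, 0, 0], [1, 0, 0, 0]))
```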

* 38 pages, 6 figures 
  

Efficient Deployment of Conversational Natural Language Interfaces over Databases

Jun 04, 2020
Anthony Colas, Trung Bui, Franck Dernoncourt, Moumita Sinha, Doo Soon Kim

Many users communicate with chatbots and AI assistants to get help with various tasks. A key component of such an assistant is the ability to understand and answer a user's natural language questions (question answering, QA). Because the data is usually stored in a structured manner, an essential step is turning a natural language question into its corresponding query-language expression. However, training most state-of-the-art natural-language-to-query-language models requires a large amount of training data first. In most domains this data is not available, and collecting such datasets for various domains can be tedious and time-consuming. In this work, we propose a novel method for accelerating training dataset collection for natural-language-to-query-language machine learning models. Our system generates conversational multi-turn data, where multiple turns define a dialogue session, enabling better use of chatbot interfaces. We train two current state-of-the-art NL-to-QL models on both SQL- and SPARQL-based datasets to showcase the adaptability and efficacy of our created data.
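A common way to bootstrap NL-to-QL training data is template expansion: pair question templates with query templates and fill their slots. The sketch below is a generic single-turn illustration with hypothetical templates and slot values, not the authors' conversational pipeline.

```python
import itertools

# Hypothetical (question, SQL) templates and slot fillers for illustration.
TEMPLATES = [
    ("how many {entity} are in {place}",
     "SELECT COUNT(*) FROM {entity} WHERE location = '{place}'"),
    ("list all {entity} in {place}",
     "SELECT * FROM {entity} WHERE location = '{place}'"),
]
ENTITIES = ["stores", "employees"]
PLACES = ["Paris", "Tokyo"]

def generate_pairs():
    """Expand every (question, query) template over all slot combinations."""
    pairs = []
    for (q_tpl, sql_tpl), entity, place in itertools.product(
            TEMPLATES, ENTITIES, PLACES):
        pairs.append((q_tpl.format(entity=entity, place=place),
                      sql_tpl.format(entity=entity, place=place)))
    return pairs

pairs = generate_pairs()
print(len(pairs))   # 2 templates x 2 entities x 2 places = 8 pairs
print(pairs[0])
```

A conversational variant, as in the paper, would additionally chain such pairs into multi-turn sessions.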

* Accepted at ACL-NLI 2020 
  
