Large language models (LLMs) have brought human-support dialogue agents to a practical and realistic stage. However, when expert knowledge is required or the response must draw on a massive dialogue database, an LLM alone still struggles with both the quality of the utterance content and the speed at which it is produced. We therefore propose a framework that uses the LLM asynchronously: one component returns an immediate, appropriate response, while another interprets the user's intention and searches the database. In particular, because the robot's own speech takes time, the database-search thread runs while the robot is speaking.
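The overlap described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `search_database` and `speak` are hypothetical stand-ins for the intent-understanding/search component and the robot's text-to-speech output, and the timings are arbitrary.

```python
import threading
import time

def search_database(query, results):
    """Hypothetical stand-in for intent understanding plus database search."""
    time.sleep(0.5)  # simulate a slow LLM-backed search
    results["hits"] = [f"info about {query}"]

def speak(text):
    """Hypothetical stand-in for the robot's text-to-speech output."""
    time.sleep(0.2)  # speaking itself takes time
    print(text)

def respond(user_utterance):
    results = {}
    # Start the database search in a worker thread ...
    worker = threading.Thread(target=search_database,
                              args=(user_utterance, results))
    worker.start()
    # ... while the robot fills the gap with an immediate filler utterance.
    speak("Let me check that for you.")
    worker.join()  # the search overlapped with speaking, hiding its latency
    return results["hits"]

print(respond("beach resorts"))
```

Because the search runs while the filler utterance is spoken, part of the search latency is hidden behind speech the robot would produce anyway.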
Various studies have been conducted on human-support robot systems, and after years of development they now appear in daily life. Robots that communicate smoothly with people are expected to play an active role in customer service and guidance, where it is essential that the customer is satisfied with the dialogue robot. However, the diversity of customer utterances makes it difficult to satisfy every request. In this study, we developed a dialogue mechanism that prevents dialogue breakdowns and keeps the customer satisfied by preparing multiple scenarios through which the robot takes the initiative in the dialogue. We evaluated it in a travel-destination recommendation task at a travel agency.
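The scenario mechanism can be sketched as a selector with a guaranteed fallback, so the robot always has a prepared line of questioning and the dialogue never stalls. The scenario names and the keyword-based selection below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical prepared scenarios for a travel-destination recommendation task.
SCENARIOS = {
    "budget": ["How much would you like to spend?"],
    "season": ["When are you planning to travel?"],
    "default": ["What kind of trip interests you: beaches, cities, or nature?"],
}

def select_scenario(utterance):
    """Pick a prepared scenario; fall back to 'default' to avoid breakdown."""
    for key in SCENARIOS:
        if key != "default" and key in utterance.lower():
            return key
    return "default"  # the robot keeps the initiative even for unmatched input

print(select_scenario("My budget is limited"))
print(select_scenario("Hello there"))
```

The `default` branch is the breakdown-prevention step: whatever the customer says, the robot can steer the dialogue back into a scenario it controls.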