Jeff Dalton

Conversational Information Seeking

Jan 21, 2022
Hamed Zamani, Johanne R. Trippas, Jeff Dalton, Filip Radlinski

Conversational information seeking (CIS) is concerned with a sequence of interactions between one or more users and an information system. Interactions in CIS are primarily based on natural language dialogue, though they may include other types of interaction, such as clicks, touches, and body gestures. This monograph provides a thorough overview of CIS definitions, applications, interactions, interfaces, design, implementation, and evaluation. It views CIS applications as including conversational search, conversational question answering, and conversational recommendation. Our aim is to provide an overview of past research related to CIS, introduce the current state of the art, highlight the challenges still faced by the community, and suggest future directions.
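
As a concrete illustration of the opening definition, the sketch below shows one way a multi-modal CIS interaction sequence might be represented in code. This is a minimal illustration only; the class names and fields are assumptions, not taken from the monograph.

    # Minimal sketch of a CIS interaction sequence; class names and fields
    # are illustrative assumptions, not taken from the monograph.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Interaction:
        actor: str     # "user" or "system"
        modality: str  # e.g. "text", "click", "touch", "gesture"
        content: str   # utterance text or a description of the action

    @dataclass
    class Conversation:
        interactions: List[Interaction] = field(default_factory=list)

        def add(self, actor: str, modality: str, content: str) -> None:
            self.interactions.append(Interaction(actor, modality, content))

    conv = Conversation()
    conv.add("user", "text", "Find an introduction to conversational search.")
    conv.add("system", "text", "Do you want a survey or a hands-on tutorial?")
    conv.add("user", "click", "selected: survey")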

* Draft Version 1.0 

ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)

Sep 23, 2020
Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, Mikhail Burtsev

This document presents a detailed description of the challenge on clarifying questions for dialogue systems (ClariQ). The challenge is organized as part of the Conversational AI challenge series (ConvAI3) at the Search Oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of conversational systems is to return an appropriate answer in response to user requests. However, some user requests may be ambiguous. In IR settings, such ambiguity is handled mainly through diversification of the search result page; this is much more challenging in dialogue settings, where bandwidth is limited. Therefore, in this challenge, we provide a common evaluation framework for mixed-initiative conversations. Participants are asked to rank clarifying questions in an information-seeking conversation. The challenge is organized in two stages: in Stage 1, submissions are evaluated in an offline setting on single-turn conversations; the top participants of Stage 1 then have their models tested by human annotators.
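
To make the ranking task concrete, the sketch below scores candidate clarifying questions against an ambiguous request with an off-the-shelf sentence-embedding model. This is not the official ClariQ baseline; the model name, request, and candidate questions are illustrative assumptions.

    # Illustrative sketch of ranking clarifying questions by embedding
    # similarity; not the official ClariQ baseline. The model, request,
    # and candidates below are assumptions.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed generic encoder

    request = "Tell me about dinosaurs"  # hypothetical ambiguous request
    candidates = [
        "Are you interested in a specific dinosaur species?",
        "Do you want information for a school project?",
        "Are you looking for museums with dinosaur exhibits?",
    ]

    # Embed the request and every candidate clarifying question.
    request_emb = model.encode(request, convert_to_tensor=True)
    candidate_embs = model.encode(candidates, convert_to_tensor=True)

    # Cosine similarity between the request and each candidate.
    scores = util.cos_sim(request_emb, candidate_embs)[0]

    # Rank candidates from highest to lowest similarity.
    ranked = sorted(zip(candidates, scores.tolist()),
                    key=lambda pair: pair[1], reverse=True)
    for question, score in ranked:
        print(f"{score:.3f}  {question}")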
