MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants

Oct 31, 2021
Alkesh Patel, Joel Ruben Antony Moniz, Roman Nguyen, Nick Tzou, Hadas Kotek, Vincent Renkens

In a multimodal assistant, where vision is also one of the input modalities, identifying user intent becomes a challenging task because visual input can influence the outcome. Current digital assistants take spoken input and try to determine the user intent from conversational or device context. A dataset that includes visual input (i.e., images or videos) for questions targeted at multimodal assistant use cases is therefore not readily available. Research in visual question answering (VQA) and visual question generation (VQG) is a great step forward; however, these datasets do not capture the questions that a visually-abled person would ask a multimodal assistant, and their questions often do not seek information from external knowledge. In this paper, we provide a new dataset, MMIU (MultiModal Intent Understanding), that contains questions and corresponding intents provided by human annotators while looking at images. We then use this dataset for the intent classification task in a multimodal digital assistant. We also experiment with various approaches to combining vision and language features, including the use of a multimodal transformer, for classifying image-question pairs into 14 intents. We provide benchmark results and discuss the role of visual and text features in the intent classification task on our dataset.
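To make the classification setup concrete, the sketch below shows one simple way to combine vision and language features for 14-way intent classification: encode the image and the question separately, concatenate the two feature vectors, and pass them through a small classifier head. This is a minimal late-fusion baseline written as an assumption, not the authors' model; the feature dimensions (e.g., CNN pooled features and a BERT-style sentence embedding) and the classifier architecture are placeholders.

```python
# Minimal late-fusion sketch (not the paper's implementation):
# concatenate precomputed image and question features and classify into 14 intents.
import torch
import torch.nn as nn

class LateFusionIntentClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, hidden_dim=512, num_intents=14):
        super().__init__()
        # image_dim/text_dim assume e.g. ResNet pooled features and a
        # BERT-style [CLS] embedding; both are assumptions, not from the paper.
        self.head = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_intents),
        )

    def forward(self, image_feats, text_feats):
        # image_feats: (batch, image_dim), text_feats: (batch, text_dim)
        fused = torch.cat([image_feats, text_feats], dim=-1)
        return self.head(fused)  # (batch, num_intents) intent logits

# Usage with random tensors standing in for real encoder outputs.
model = LateFusionIntentClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))
predicted_intent = logits.argmax(dim=-1)
```

A multimodal transformer, as mentioned in the abstract, would instead fuse image regions and question tokens with cross-attention before classification; the concatenation baseline above is only the simplest point of comparison.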

* Extended abstract accepted for WeCNLP 2021 