Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations and is essential for the development of communicative social agents. In this paper, we introduce a novel challenge, DiPlomat, aimed at benchmarking machines' capabilities in pragmatic reasoning and situated conversational understanding. Compared with previous works that treat different figurative expressions (e.g., metaphor, sarcasm) as individual tasks, DiPlomat provides a cohesive framework for general pragmatic understanding. Our dataset is created through Amazon Mechanical Turk (AMT), resulting in a total of 4,177 multi-turn dialogues. In conjunction with the dataset, we propose two tasks: Pragmatic Identification and Reasoning (PIR) and Conversational Question Answering (CQA). Experimental results with state-of-the-art (SOTA) neural architectures reveal several significant findings: 1) large language models (LLMs) exhibit poor performance in this subjective domain; 2) comprehensive understanding of context emerges as a critical factor for establishing benign human-machine interactions; and 3) current models are deficient in applying pragmatic reasoning. Accordingly, we call for more attention to improving context understanding, reasoning, and the modeling of implied meaning.