
Zhengliang Shi


RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue

Sep 18, 2023
Zhengliang Shi, Weiwei Sun, Shuo Zhang, Zhen Zhang, Pengjie Ren, Zhaochun Ren

Evaluating open-domain dialogue systems is challenging, in part because of the one-to-many problem: many responses may be appropriate besides the single golden response. Existing automatic evaluation methods correlate poorly with human judgments, while reliable human evaluation is time- and cost-intensive. To this end, we propose the Reference-Assisted Dialogue Evaluation (RADE) approach under the multi-task learning framework, which leverages a pre-created utterance as a reference, in addition to the gold response, to relieve the one-to-many problem. Specifically, RADE explicitly compares the reference and the candidate response to predict an overall score. Moreover, an auxiliary response-generation task enhances the prediction via a shared encoder. To support RADE, we extend three datasets with additional human-annotated, rated responses beyond the single golden response. Experiments on these three datasets and two existing benchmarks demonstrate the effectiveness of our method: its Pearson, Spearman, and Kendall correlations with human evaluation outperform state-of-the-art baselines.
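
To make the setup concrete, below is a minimal PyTorch sketch of the multi-task architecture the abstract describes: a shared encoder feeding a score-prediction head that compares the reference with the candidate, plus an auxiliary generation head. All module names, dimensions, and the pooling choice are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RADESketch(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared across tasks
        self.score_head = nn.Linear(2 * d_model, 1)     # compares reference vs. candidate
        self.gen_head = nn.Linear(d_model, vocab_size)  # auxiliary response generation

    def encode(self, ids):
        # Mean-pooled utterance embedding (an assumed pooling choice).
        return self.encoder(self.embed(ids)).mean(dim=1)

    def forward(self, context_ids, reference_ids, candidate_ids):
        ref, cand = self.encode(reference_ids), self.encode(candidate_ids)
        score = self.score_head(torch.cat([ref, cand], dim=-1)).squeeze(-1)
        gen_logits = self.gen_head(self.encoder(self.embed(context_ids)))
        return score, gen_logits

model = RADESketch()
ctx, ref, cand = (torch.randint(0, 32000, (2, 16)) for _ in range(3))
score, gen_logits = model(ctx, ref, cand)
# Training would combine, e.g., MSE(score, human_rating) with a weighted
# cross-entropy over gen_logits for the auxiliary generation task.
```

Sharing the encoder is what lets the auxiliary generation loss shape the representations used for scoring.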

* 19 pages, Accepted by ACL 2023 main conference 

Confucius: Iterative Tool Learning from Introspection Feedback by Easy-to-Difficult Curriculum

Aug 27, 2023
Shen Gao, Zhengliang Shi, Minghang Zhu, Bowen Fang, Xin Xin, Pengjie Ren, Zhumin Chen, Jun Ma

Augmenting large language models (LLMs) with external tools has emerged as a promising approach to extending their capabilities. Although some works employ open-source LLMs for the tool-learning task, most of them are trained in a controlled environment where the LLMs only learn to execute human-provided tools. However, selecting proper tools from a large toolset is also a crucial ability for tool-learning models deployed in real-world applications. Existing methods usually train the model directly with self-instruction, which ignores differences in tool complexity. In this paper, we propose Confucius, a novel tool-learning framework that trains LLMs to use complicated tools in real-world scenarios. It contains two main phases: (1) we first propose a multi-stage learning method that teaches the LLM to use various tools through an easy-to-difficult curriculum; (2) we then propose Iterative Self-instruct from Introspective Feedback (ISIF), which dynamically updates the training dataset to improve the model's ability to use complicated tools. Extensive experiments in both controlled and real-world settings demonstrate the superiority of our tool-learning framework in real-world application scenarios over both tuning-free (e.g., ChatGPT, Claude) and tuning-based (e.g., GPT4Tools) baselines.
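
As a rough illustration of the two phases, the toy sketch below runs an easy-to-difficult curriculum and then repeatedly regenerates the examples the model still fails on. Every function here (the random `introspect`, the placeholder `train_on` and `generate_instances`) is a stand-in assumption, not the paper's actual procedure.

```python
import random

def generate_instances(tool, n=4):
    # Stand-in for self-instruct generation of tool-use examples.
    return [{"tool": tool["name"], "difficulty": tool["difficulty"]} for _ in range(n)]

def train_on(model, dataset):
    model["seen"] = len(dataset)  # placeholder for a fine-tuning step

def introspect(model, example):
    # Stand-in for the model critiquing its own output on this example.
    return random.random() > 0.2 * example["difficulty"]

def curriculum_tool_learning(model, toolset, rounds=3):
    # Phase 1: multi-stage curriculum, ordered from simple to complicated tools.
    dataset = []
    for tool in sorted(toolset, key=lambda t: t["difficulty"]):
        dataset += generate_instances(tool)
        train_on(model, dataset)
    # Phase 2: ISIF-style loop -- keep extending the dataset with revised
    # versions of the examples the model still fails on.
    for _ in range(rounds):
        failures = [ex for ex in dataset if not introspect(model, ex)]
        dataset += [dict(ex) for ex in failures]  # placeholder for revised instances
        train_on(model, dataset)
    return model

toolset = [{"name": "calculator", "difficulty": 1}, {"name": "sql_api", "difficulty": 3}]
print(curriculum_tool_learning({"seen": 0}, toolset))
```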


Contrastive Learning Reduces Hallucination in Conversations

Dec 20, 2022
Weiwei Sun, Zhengliang Shi, Shen Gao, Pengjie Ren, Maarten de Rijke, Zhaochun Ren

Pre-trained language models (LMs) store knowledge in their parameters and can generate informative responses when used in conversational systems. However, LMs suffer from the problem of "hallucination": they may generate plausible-looking statements that are irrelevant or factually incorrect. To address this problem, we propose a contrastive learning scheme named MixCL. Its novel mixed contrastive objective explicitly optimizes the implicit knowledge-elicitation process of LMs and thus reduces their hallucination in conversations. We also examine negative sampling strategies that use both retrieved hard negatives and model-generated negatives. We conduct experiments on Wizard-of-Wikipedia, a public, open-domain knowledge-grounded dialogue benchmark, and assess the effectiveness of MixCL. MixCL effectively reduces the hallucination of LMs in conversations and achieves the highest relevance and factuality among LM-based dialogue agents. We show that MixCL achieves performance comparable to state-of-the-art KB-based approaches while enjoying notable advantages in efficiency and scalability.
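
For intuition, here is a minimal InfoNCE-style sketch of a contrastive loss over one grounded positive span and a mixed pool of hard negatives (retrieved and model-generated), in the spirit of MixCL; the embeddings, cosine similarity, and temperature are illustrative assumptions, not the paper's exact mixed objective.

```python
import torch
import torch.nn.functional as F

def mixed_contrastive_loss(anchor, positive, negatives, tau=0.1):
    """anchor: (d,) context embedding; positive: (d,) grounded knowledge span;
    negatives: (n, d) retrieved plus model-generated hallucinated spans."""
    pos_sim = F.cosine_similarity(anchor, positive, dim=0) / tau
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / tau
    logits = torch.cat([pos_sim.view(1), neg_sim])  # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

d = 128
loss = mixed_contrastive_loss(torch.randn(d), torch.randn(d), torch.randn(8, d))
print(loss.item())  # lower loss = positive ranked above the hard negatives
```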

* Accepted by AAAI 2023 