Yonghua Zhu

Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text

May 21, 2023
Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu

Pre-trained large language models (LLMs) are being explored for NLP tasks that may require logical reasoning. Logic-driven data augmentation for representation learning has been shown to improve performance on tasks requiring logical reasoning, but most such augmented data rely on hand-designed templates and therefore lack generalization. To address this, we propose an AMR-based logical-equivalence-driven data augmentation method (AMR-LE) for generating logically equivalent data. Specifically, we first parse a text into an AMR graph, then apply four logical equivalence laws (contraposition, double negation, commutativity, and implication) to the AMR graph to construct a logically equivalent/inequivalent AMR graph, and finally convert it into a logically equivalent/inequivalent sentence. To help the model better learn these logical equivalence laws, we propose a logical-equivalence-driven contrastive learning training paradigm, which aims to distinguish logical equivalence from inequivalence. Our AMR-LE (Ensemble) achieves #2 on the ReClor leaderboard ( https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347 ). Our model shows better performance on seven downstream tasks: ReClor, LogiQA, MNLI, MRPC, RTE, QNLI, and QQP. The source code and dataset are publicly available at https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning .
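
To make one of the equivalence laws concrete, here is a minimal, runnable sketch of contraposition applied to a toy implication structure; this is illustrative only, not the authors' code, since the real method operates on parsed AMR graphs rather than this simplified tuple encoding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Implication:
    """Toy stand-in for an AMR subgraph encoding 'if antecedent then consequent'."""
    antecedent: str
    consequent: str
    antecedent_negated: bool = False
    consequent_negated: bool = False

def contraposition(rule: Implication) -> Implication:
    """A -> B is logically equivalent to (not B) -> (not A)."""
    return Implication(
        antecedent=rule.consequent,
        consequent=rule.antecedent,
        antecedent_negated=not rule.consequent_negated,
        consequent_negated=not rule.antecedent_negated,
    )

def to_sentence(rule: Implication) -> str:
    def side(negated: bool, text: str) -> str:
        return f"it is not the case that {text}" if negated else text
    return (f"If {side(rule.antecedent_negated, rule.antecedent)}, "
            f"then {side(rule.consequent_negated, rule.consequent)}.")

rule = Implication("it rains", "the ground is wet")
print(to_sentence(rule))                  # anchor sentence
print(to_sentence(contraposition(rule)))  # logically equivalent positive example
```

In the contrastive setup, a logically inequivalent variant (e.g., the converse, which swaps antecedent and consequent without negating either side) would play the role of a negative example.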

Prompt-based Conservation Learning for Multi-hop Question Answering

Sep 14, 2022
Zhenyun Deng, Yonghua Zhu, Yang Chen, Qianqian Qi, Michael Witbrock, Patricia Riddle

Multi-hop question answering (QA) requires reasoning over multiple documents to answer a complex question and provide interpretable supporting evidence. However, providing supporting evidence is not enough to demonstrate that a model has performed the desired reasoning to reach the correct answer: most existing multi-hop QA methods fail to answer a large fraction of sub-questions even when their parent questions are answered correctly. In this paper, we propose the Prompt-based Conservation Learning (PCL) framework for multi-hop QA, which acquires new knowledge from multi-hop QA tasks while conserving the old knowledge learned on single-hop QA tasks, thereby mitigating forgetting. Specifically, we first train a model on existing single-hop QA tasks, then freeze this model and expand it by allocating additional sub-networks for the multi-hop QA task. Moreover, to condition pre-trained language models to stimulate the kind of reasoning required for specific multi-hop questions, we learn soft prompts for the new sub-networks to perform type-specific reasoning. Experimental results on the HotpotQA benchmark show that PCL is competitive for multi-hop QA while retaining good performance on the corresponding single-hop sub-questions, demonstrating the efficacy of PCL in mitigating knowledge loss through forgetting.
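
As an illustration of the two ingredients the abstract describes (a frozen single-hop backbone plus newly allocated, prompt-conditioned capacity), here is a minimal PyTorch sketch; the module, class, and parameter names are assumptions made for illustration, not the released PCL code:

```python
import torch
import torch.nn as nn

class PromptedExpansion(nn.Module):
    def __init__(self, backbone: nn.Module, hidden: int, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # conserve single-hop knowledge
            p.requires_grad = False
        # new sub-network allocated for the multi-hop task
        self.adapter = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        # type-specific soft prompt: trainable pseudo-token embeddings
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # prepend the soft prompt to every sequence in the batch
        batch = token_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        hidden = self.backbone(torch.cat([prompt, token_embeds], dim=1))
        return hidden + self.adapter(hidden)   # frozen path + new capacity

# usage: only the adapter and the soft prompt receive gradients
layer = nn.TransformerEncoderLayer(128, 4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
model = PromptedExpansion(backbone, hidden=128)
out = model(torch.randn(2, 16, 128))
print(out.shape)  # torch.Size([2, 36, 128])
```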

* Accepted to COLING 2022 

Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering

Jun 16, 2022
Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia Riddle

Effective multi-hop question answering (QA) requires reasoning over multiple scattered paragraphs and providing explanations for answers. Most existing approaches cannot provide an interpretable reasoning process to illustrate how these models arrive at an answer. In this paper, we propose a Question Decomposition method based on Abstract Meaning Representation (QDAMR) for multi-hop QA, which achieves interpretable reasoning by decomposing a multi-hop question into simpler sub-questions and answering them in order. Since annotating the decomposition is expensive, we first delegate the complexity of understanding the multi-hop question to an AMR parser. We then achieve the decomposition of a multi-hop question via segmentation of the corresponding AMR graph based on the required reasoning type. Finally, we generate sub-questions using an AMR-to-Text generation model and answer them with an off-the-shelf QA model. Experimental results on HotpotQA demonstrate that our approach is competitive for interpretable reasoning and that the sub-questions generated by QDAMR are well-formed, outperforming existing question-decomposition-based multi-hop QA approaches.
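
A hedged sketch of the decomposition pipeline described above, with the AMR parser, graph segmenter, AMR-to-Text generator, and single-hop reader all stubbed out as placeholders; the [ANS] bridging token and every name here are illustrative assumptions, not the QDAMR implementation:

```python
from typing import Callable, List

def answer_multi_hop(question: str,
                     amr_parse: Callable[[str], object],
                     segment: Callable[[object], List[object]],
                     amr_to_text: Callable[[object], str],
                     single_hop_qa: Callable[[str], str]) -> str:
    graph = amr_parse(question)      # 1. delegate question understanding to the parser
    sub_graphs = segment(graph)      # 2. split the graph by required reasoning type
    answer = ""
    for g in sub_graphs:             # 3./4. generate sub-questions and answer in order
        sub_q = amr_to_text(g).replace("[ANS]", answer)  # bridge in earlier answer
        answer = single_hop_qa(sub_q)
    return answer

# toy bridging example with stubbed components
facts = {"Who directed Inception?": "Christopher Nolan",
         "Which country is Christopher Nolan from?": "the United Kingdom"}
print(answer_multi_hop(
    "Which country is the director of Inception from?",
    amr_parse=lambda q: q,                              # placeholder parse
    segment=lambda g: ["Who directed Inception?",
                       "Which country is [ANS] from?"],  # placeholder segmentation
    amr_to_text=lambda g: g,                            # placeholder generation
    single_hop_qa=lambda q: facts[q],
))  # -> "the United Kingdom"
```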

* Accepted by IJCAI 2022 