Patricia Riddle
Prompt-based Conservation Learning for Multi-hop Question Answering

Sep 14, 2022
Zhenyun Deng, Yonghua Zhu, Yang Chen, Qianqian Qi, Michael Witbrock, Patricia Riddle

(Figures 1–4)

Multi-hop question answering (QA) requires reasoning over multiple documents to answer a complex question and provide interpretable supporting evidence. However, providing supporting evidence is not enough to demonstrate that a model has performed the desired reasoning to reach the correct answer. Most existing multi-hop QA methods fail to answer a large fraction of sub-questions, even if their parent questions are answered correctly. In this paper, we propose the Prompt-based Conservation Learning (PCL) framework for multi-hop QA, which acquires new knowledge from multi-hop QA tasks while conserving old knowledge learned on single-hop QA tasks, mitigating forgetting. Specifically, we first train a model on existing single-hop QA tasks, and then freeze this model and expand it by allocating additional sub-networks for the multi-hop QA task. Moreover, to condition pre-trained language models to stimulate the kind of reasoning required for specific multi-hop questions, we learn soft prompts for the novel sub-networks to perform type-specific reasoning. Experimental results on the HotpotQA benchmark show that PCL is competitive for multi-hop QA and retains good performance on the corresponding single-hop sub-questions, demonstrating the efficacy of PCL in mitigating knowledge loss by forgetting.
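The core conservation mechanism described in the abstract — freeze the single-hop model, allocate a new sub-network, and learn soft prompts prepended to the input — can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class PromptedExpansion(nn.Module):
    """Illustrative sketch of prompt-based conservation learning:
    freeze a base encoder trained on single-hop QA (conserving old
    knowledge), then expand it with a trainable sub-network and soft
    prompt embeddings for the multi-hop task. Hypothetical names,
    not the paper's code."""

    def __init__(self, base_encoder: nn.Module, hidden: int, n_prompts: int = 20):
        super().__init__()
        self.base = base_encoder
        for p in self.base.parameters():
            p.requires_grad = False  # old knowledge is frozen, not overwritten
        # soft prompts: learned vectors prepended to the token embeddings,
        # conditioning the model toward type-specific reasoning
        self.prompts = nn.Parameter(torch.randn(n_prompts, hidden) * 0.02)
        # additional sub-network allocated for the new (multi-hop) task
        self.subnet = nn.Sequential(
            nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        b = token_embeds.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompts, token_embeds], dim=1)  # prepend prompts
        return self.subnet(self.base(x))
```

Only `self.prompts` and `self.subnet` receive gradients during multi-hop training, so single-hop performance is preserved by construction.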

* Accepted to COLING 2022 

A Theory for Knowledge Transfer in Continual Learning

Aug 14, 2022
Diana Benavides-Prado, Patricia Riddle

(Figures 1–2)

Continual learning of a stream of tasks is an active area in deep neural networks. The main challenge investigated has been catastrophic forgetting, the interference of newly acquired knowledge with knowledge from previous tasks. Recent work has investigated forward knowledge transfer to new tasks; backward transfer, which improves knowledge gained during previous tasks, has received much less attention. In general, there is limited understanding of how knowledge transfer could aid tasks learned continually. We present a theory for knowledge transfer in continual supervised learning that considers both forward and backward transfer, aiming to understand their impact on increasingly knowledgeable learners. We derive error bounds for each of these transfer mechanisms. These bounds are agnostic to specific implementations (e.g. deep neural networks). We demonstrate that, for a continual learner that observes related tasks, both forward and backward transfer can contribute to increasing performance as more tasks are observed.
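The paper's own bounds are not reproduced here, but the notions of forward and backward transfer it analyses are commonly quantified with the standard metrics of Lopez-Paz & Ranzato (GEM), shown below purely as an illustration of what "forward" and "backward" mean operationally:

```latex
% R_{i,j}: test performance on task j after training up to task i; T tasks;
% \bar{b}_i: performance of a randomly initialised model on task i.
% Standard transfer metrics (Lopez-Paz & Ranzato), not the paper's bounds:
\mathrm{BWT} = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( R_{T,i} - R_{i,i} \right),
\qquad
\mathrm{FWT} = \frac{1}{T-1} \sum_{i=2}^{T} \left( R_{i-1,i} - \bar{b}_i \right)
```

Positive BWT means later training improved earlier tasks (backward transfer); positive FWT means earlier training helped tasks not yet trained on (forward transfer).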

* Conference on Lifelong Learning Agents (CoLLAs 2022) 

Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering

Jun 16, 2022
Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia Riddle

(Figures 1–4)

Effective multi-hop question answering (QA) requires reasoning over multiple scattered paragraphs and providing explanations for answers. Most existing approaches cannot provide an interpretable reasoning process to illustrate how these models arrive at an answer. In this paper, we propose a Question Decomposition method based on Abstract Meaning Representation (QDAMR) for multi-hop QA, which achieves interpretable reasoning by decomposing a multi-hop question into simpler sub-questions and answering them in order. Since annotating the decomposition is expensive, we first delegate the complexity of understanding the multi-hop question to an AMR parser. We then achieve the decomposition of a multi-hop question via segmentation of the corresponding AMR graph based on the required reasoning type. Finally, we generate sub-questions using an AMR-to-Text generation model and answer them with an off-the-shelf QA model. Experimental results on HotpotQA demonstrate that our approach is competitive for interpretable reasoning and that the sub-questions generated by QDAMR are well-formed, outperforming existing question-decomposition-based multi-hop QA approaches.
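The decomposition pipeline in the abstract — parse the question to AMR, segment the graph by reasoning type, generate sub-questions with AMR-to-Text, and answer them in order — can be sketched as a simple orchestrator. Every component here is a hypothetical stand-in passed as a callable; this is not the authors' code or any real parser's API.

```python
from typing import Callable, List, Tuple

def qdamr_pipeline(
    question: str,
    amr_parse: Callable[[str], str],            # question -> AMR graph
    segment: Callable[[str], List[str]],        # graph -> ordered subgraphs
    amr_to_text: Callable[[str, List[str]], str],  # subgraph (+ earlier answers) -> sub-question
    answer_qa: Callable[[str], str],            # off-the-shelf single-hop QA
) -> Tuple[str, List[str]]:
    """Illustrative sketch of AMR-based question decomposition:
    delegate understanding to an AMR parser, split the graph by the
    required reasoning type, then answer the sub-questions in order."""
    graph = amr_parse(question)
    subgraphs = segment(graph)
    answers: List[str] = []
    for g in subgraphs:
        # earlier answers may be needed to realise later sub-questions
        # (e.g. substituting a bridge entity), hence they are passed along
        sub_question = amr_to_text(g, answers)
        answers.append(answer_qa(sub_question))
    return answers[-1], answers  # final answer + interpretable trace
```

The returned list of intermediate answers is what makes the reasoning process inspectable: each sub-question/answer pair is a human-readable step.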

* Accepted by IJCAI 2022 

Texture Modelling with Nested High-order Markov-Gibbs Random Fields

Oct 08, 2015
Ralph Versteegen, Georgy Gimel'farb, Patricia Riddle

(Figures 1–4)

Currently, Markov-Gibbs random field (MGRF) image models which include high-order interactions are almost always built by modelling responses of a stack of local linear filters. Actual interaction structure is specified implicitly by the filter coefficients. In contrast, we learn an explicit high-order MGRF structure by considering the learning process in terms of general exponential family distributions nested over base models, so that potentials added later can build on previous ones. We relatively rapidly add new features by skipping over the costly optimisation of parameters. We introduce the use of local binary patterns as features in MGRF texture models, and generalise them by learning offsets to the surrounding pixels. These prove effective as high-order features, and are fast to compute. Several schemes for selecting high-order features by composition or search of a small subclass are compared. Additionally we present a simple modification of the maximum likelihood as a texture modelling-specific objective function which aims to improve generalisation by local windowing of statistics. The proposed method was experimentally evaluated by learning high-order MGRF models for a broad selection of complex textures and then performing texture synthesis, and succeeded on much of the continuum from stochastic through irregularly structured to near-regular textures. Learning interaction structure is very beneficial for textures with large-scale structure, although those with complex irregular structure still provide difficulties. The texture models were also quantitatively evaluated on two tasks and found to be competitive with other works: grading of synthesised textures by a panel of observers; and comparison against several recent MGRF models by evaluation on a constrained inpainting task.
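The local binary pattern features mentioned in the abstract compare each pixel against neighbours at a set of offsets and pack the comparison bits into an integer code; the paper's generalisation makes those offsets learnable. A minimal sketch of the basic feature (offsets fixed here, purely illustrative; function name is hypothetical):

```python
import numpy as np

def lbp_feature(img: np.ndarray, offsets, threshold: int = 0) -> np.ndarray:
    """Local binary pattern codes for a 2-D greyscale image: for each
    pixel, compare neighbours at the given (dy, dx) offsets against the
    centre value and pack the resulting bits into an integer. In the
    paper the offsets are learned; here they are just inputs."""
    h, w = img.shape
    # margin so every shifted window stays inside the image
    m = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    centre = img[m:h - m, m:w - m]
    codes = np.zeros_like(centre, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[m + dy:h - m + dy, m + dx:w - m + dx]
        codes |= (neigh >= centre + threshold).astype(np.int64) << bit
    return codes
```

Histograms of such codes over an image (or a local window, as in the paper's windowed statistics) then serve as the high-order texture features, and they are fast: one pass of shifted comparisons per offset.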

* Submitted to Computer Vision and Image Understanding 