Hai Zhao

High-order Semantic Role Labeling

Oct 09, 2020
Zuchao Li, Hai Zhao, Rui Wang, Kevin Parnow

Semantic role labeling is primarily used to identify predicates, arguments, and their semantic relationships. Due to the limitations of modeling methods and the condition of pre-identified predicates, previous work has at most focused on the relationships between predicates and arguments and the correlations between arguments, while the correlations between predicates have long been neglected. High-order features and structure learning were very common in modeling such correlations before the neural network era. In this paper, we introduce a high-order graph structure for the neural semantic role labeling model, which enables the model to explicitly consider not only isolated predicate-argument pairs but also the interactions between predicate-argument pairs. Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to strongly performing SRL models and further boost our baseline to achieve new state-of-the-art results.

* EMNLP 2020, ACL Findings 
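
As a rough illustration of scoring interactions between predicate-argument pairs rather than isolated pairs, the sketch below combines a biaffine first-order predicate-argument score with a low-rank trilinear score over two predicates that share an argument. It is a minimal sketch in PyTorch with assumed layer names and sizes, not the model from the paper.

```python
import torch
import torch.nn as nn

class HighOrderSRLScorer(nn.Module):
    """Toy first- and second-order scorers for predicate-argument graphs.

    Not the paper's model: just an illustration of combining a biaffine
    first-order score s(p, a) with a trilinear second-order score
    s(p1, p2, a) for two predicates sharing one argument.
    """

    def __init__(self, hidden: int = 256, rank: int = 64):
        super().__init__()
        self.pred_mlp = nn.Linear(hidden, rank)
        self.arg_mlp = nn.Linear(hidden, rank)
        # Biaffine matrix for first-order predicate-argument scores.
        self.W1 = nn.Parameter(torch.randn(rank, rank) * 0.01)
        # Low-rank factor for second-order (p1, p2, a) scores.
        self.U = nn.Parameter(torch.randn(rank, rank) * 0.01)

    def forward(self, h: torch.Tensor):
        # h: [seq_len, hidden] contextualized token representations.
        p = torch.relu(self.pred_mlp(h))   # predicate view  [n, r]
        a = torch.relu(self.arg_mlp(h))    # argument view   [n, r]
        # First-order: s1[i, j] scores token i as predicate of argument j.
        s1 = p @ self.W1 @ a.t()           # [n, n]
        # Second-order: s2[i, k, j] scores predicates i and k sharing
        # argument j, via a low-rank trilinear form.
        pu = p @ self.U                    # [n, r]
        s2 = torch.einsum("ir,kr,jr->ikj", pu, p, a)
        return s1, s2

scorer = HighOrderSRLScorer()
s1, s2 = scorer(torch.randn(10, 256))
print(s1.shape, s2.shape)  # torch.Size([10, 10]) torch.Size([10, 10, 10])
```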

Topic-Aware Multi-turn Dialogue Modeling

Sep 26, 2020
Yi Xu, Hai Zhao, Zhuosheng Zhang

In retrieval-based multi-turn dialogue modeling, it remains a challenge to select the most appropriate response by extracting salient features from the context utterances. As a conversation goes on, topic shifts at the discourse level naturally occur across the continuous multi-turn dialogue context. However, all known retrieval-based systems settle for exploiting local topic words for context utterance representation and fail to capture such essential global, discourse-level topic-aware clues. Instead of taking topic-agnostic n-gram utterances as the processing unit for matching, as existing systems do, this paper presents a novel topic-aware solution for multi-turn dialogue modeling, which segments and extracts topic-aware utterances in an unsupervised way, so that the resulting model can capture salient topic shifts at the discourse level when needed and thus effectively track topic flow during multi-turn conversation. Our topic-aware modeling is implemented by a newly proposed unsupervised topic-aware segmentation algorithm and a Topic-Aware Dual-attention Matching (TADAM) Network, which matches each topic segment with the response in a dual cross-attention way. Experimental results on three public datasets show TADAM can outperform the state-of-the-art method by a large margin, especially by 3.4% on the E-commerce dataset, which exhibits obvious topic shifts.
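
The abstract does not spell out the segmentation algorithm here, so the sketch below only illustrates one common way to segment a dialogue by topic without supervision: start a new segment whenever the cosine similarity between consecutive utterance embeddings drops below a threshold. The encoder and threshold are placeholder assumptions, not the paper's TADAM pipeline.

```python
import numpy as np

def segment_by_topic(utterance_embs: np.ndarray, threshold: float = 0.5):
    """Split a dialogue into topic segments without supervision.

    utterance_embs: [num_utterances, dim] sentence embeddings from any
    sentence encoder. A new segment starts whenever the cosine similarity
    between consecutive utterances falls below `threshold` (an
    illustrative value, not one from the paper).
    """
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    segments, current = [], [0]
    for i in range(1, len(utterance_embs)):
        if cos(utterance_embs[i - 1], utterance_embs[i]) < threshold:
            segments.append(current)      # topic shift detected
            current = []
        current.append(i)
    segments.append(current)
    return segments  # list of lists of utterance indices

# Toy usage: 6 utterances in a 4-dim space, with a topic shift after index 2.
embs = np.array([[1, 0, 0, 0], [0.9, 0.1, 0, 0], [0.8, 0.2, 0, 0],
                 [0, 0, 1, 0], [0, 0, 0.9, 0.1], [0, 0, 0.8, 0.2]], dtype=float)
print(segment_by_topic(embs))  # [[0, 1, 2], [3, 4, 5]]
```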

Document-level Neural Machine Translation with Document Embeddings

Sep 16, 2020
Shu Jiang, Hai Zhao, Zuchao Li, Bao-Liang Lu

Standard neural machine translation (NMT) assumes that translation is independent of document-level context. Most existing document-level NMT methods settle for only a smattering of brief document-level information, while this work focuses on exploiting detailed document-level context in terms of multiple forms of document embeddings, which can sufficiently model deeper and richer document-level context. The proposed document-aware NMT is implemented to enhance the Transformer baseline by introducing both global and local document-level clues on the source side. Experiments show that the proposed method significantly improves the translation performance over strong baselines and other related studies.

* arXiv admin note: substantial text overlap with arXiv:1910.14528 
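
As a minimal sketch of the source-side injection point such a method needs, the snippet below gates a global document embedding (here simply the mean of the document's token embeddings) into each source token representation before a standard Transformer encoder. The gating choice and dimensions are assumptions; the paper's document embeddings are richer than this.

```python
import torch
import torch.nn as nn

class DocAwareEncoderInput(nn.Module):
    """Add a global document-level clue to each source token embedding.

    Sketch only: the document embedding is the mean of all token embeddings
    in the document, gated into each token of the current sentence. This
    illustrates the injection point before the encoder, not the paper's
    exact forms of document embeddings.
    """

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, sent_tokens: torch.Tensor, doc_tokens: torch.Tensor):
        # sent_tokens: [sent_len, d_model] embeddings of the current sentence.
        # doc_tokens:  [doc_len, d_model] embeddings of the whole document.
        doc_emb = doc_tokens.mean(dim=0, keepdim=True)        # [1, d_model]
        doc_emb = doc_emb.expand(sent_tokens.size(0), -1)     # broadcast per token
        g = torch.sigmoid(self.gate(torch.cat([sent_tokens, doc_emb], dim=-1)))
        return sent_tokens + g * doc_emb                      # document-aware input

layer = DocAwareEncoderInput()
out = layer(torch.randn(20, 512), torch.randn(300, 512))
print(out.shape)  # torch.Size([20, 512])
```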

Graph-to-Sequence Neural Machine Translation

Sep 16, 2020
Sufeng Duan, Hai Zhao, Rui Wang

Neural machine translation (NMT) usually works in a seq2seq learning way by viewing either the source or target sentence as a linear sequence of words, which can be regarded as a special case of a graph, with words in the sequence as nodes and relationships between words as edges. Given that current NMT models more or less capture graph information within the sequence in a latent way, we present a graph-to-sequence model that captures graph information explicitly. In detail, we propose a graph-based, self-attention network (SAN)-based NMT model called Graph-Transformer, which captures information from subgraphs of different orders in every layer. Subgraphs are put into different groups according to their orders, and each group of subgraphs reflects a different level of dependency between words. To fuse subgraph representations, we empirically explore three methods that weight the groups of subgraphs of different orders. Results of experiments on WMT14 English-German and IWSLT14 German-English show that our method can effectively boost the Transformer, with an improvement of 1.1 BLEU points on the WMT14 English-German dataset and 1.0 BLEU points on the IWSLT14 German-English dataset.
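
A hedged sketch of the order-wise grouping idea: restrict self-attention to node pairs within k hops of each other in the source graph, compute one output per order, and fuse the groups with learned weights. Module sizes, the random adjacency in the usage lines, and the fusion scheme are illustrative assumptions, not the Graph-Transformer as published.

```python
import torch
import torch.nn as nn

class SubgraphAttentionFusion(nn.Module):
    """Order-wise masked self-attention over a source graph (sketch only).

    For each order k, attention is restricted to node pairs reachable within
    k hops of each other, and the per-order outputs are fused with learned
    softmax weights. This illustrates grouping subgraphs by order; it is not
    the paper's Graph-Transformer layer.
    """

    def __init__(self, d_model: int = 512, n_heads: int = 8, max_order: int = 3):
        super().__init__()
        self.n_heads = n_heads
        self.max_order = max_order
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Parameter(torch.zeros(max_order))

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x:   [batch, n, d_model] token states
        # adj: [batch, n, n] 0/1 float adjacency (e.g. from a dependency parse)
        eye = torch.eye(x.size(1), dtype=torch.bool, device=x.device)
        reach = adj.clone()
        outputs = []
        for _ in range(self.max_order):
            # attn_mask: True marks pairs that must NOT attend to each other;
            # the diagonal is always allowed so every row stays attendable.
            mask = (reach == 0) & ~eye
            mask = mask.repeat_interleave(self.n_heads, dim=0)  # per-head copy
            out, _ = self.attn(x, x, x, attn_mask=mask)
            outputs.append(out)
            reach = (reach @ adj + reach).clamp(max=1.0)        # extend one hop
        w = torch.softmax(self.fuse, dim=0)
        return sum(w[k] * outputs[k] for k in range(self.max_order))

layer = SubgraphAttentionFusion()
adj = (torch.rand(2, 12, 12) > 0.7).float()
print(layer(torch.randn(2, 12, 512), adj).shape)  # torch.Size([2, 12, 512])
```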

Multi-span Style Extraction for Generative Reading Comprehension

Sep 15, 2020
Junjie Yang, Zhuosheng Zhang, Hai Zhao

Generative machine reading comprehension (MRC) requires a model to generate well-formed answers. For this type of MRC, the answer generation method is crucial to model performance. However, generative models, which are supposedly the right models for the task, generally perform poorly. At the same time, single-span extraction models have proven effective for extractive MRC, where the answer is constrained to a single span in the passage. Nevertheless, they generally suffer from generating incomplete answers or introducing redundant words when applied to generative MRC. Thus, we extend the single-span extraction method to multi-span extraction, proposing a new framework that enables generative MRC to be smoothly solved as multi-span extraction. Thorough experiments demonstrate that this novel approach can alleviate the dilemma between generative models and single-span models and produce answers with better-formed syntax and semantics. We will open-source our code for the research community.
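
One simple way to realize multi-span extraction is token-level BIO tagging followed by span decoding and concatenation; the sketch below shows only that decoding step over assumed tag logits and is not necessarily the tagging scheme used in the paper.

```python
import torch

def decode_multi_spans(tag_logits: torch.Tensor, tokens: list):
    """Decode an answer as multiple spans via a simple BIO tagging scheme.

    tag_logits: [seq_len, 3] scores for tags (O=0, B=1, I=2) produced by
    any token-level classifier head. This is one straightforward way to
    cast multi-span extraction, not necessarily the paper's exact scheme.
    """
    tags = tag_logits.argmax(dim=-1).tolist()
    spans, start = [], None
    for i, t in enumerate(tags):
        if t == 1:                              # B: begin a new span
            if start is not None:
                spans.append((start, i))
            start = i
        elif t == 0 and start is not None:      # O: close the current span
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(tags)))
    # Concatenate the extracted spans into one answer string.
    return " ".join(" ".join(tokens[s:e]) for s, e in spans)

# Toy usage with hand-crafted logits favoring the tag sequence O B I O B I I.
logits = torch.tensor([[5., 0, 0], [0, 5, 0], [0, 0, 5], [5, 0, 0],
                       [0, 5, 0], [0, 0, 5], [0, 0, 5]])
print(decode_multi_spans(logits, "the cat sat on a red mat".split()))
# -> "cat sat a red mat"
```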

Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue

Sep 14, 2020
Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles. Thus, utterance- and speaker-aware clues are supposed to be well captured in models. However, in existing retrieval-based multi-turn dialogue modeling, the pre-trained language models (PrLMs) used as encoders represent the dialogues coarsely by taking the dialogue history and candidate response pair as a whole, so the hierarchical information on either utterance interrelation or speaker roles coupled into such representations is not well addressed. In this work, we propose a novel model to fill this gap by modeling the effective utterance-aware and speaker-aware representations entailed in a dialogue history. In detail, we decouple the contextualized word representations with masking mechanisms in a Transformer-based PrLM, making each word focus only on the words in the current utterance, in other utterances, or in the utterances of each of the two speaker roles (i.e., the sender's utterances and the receiver's utterances), respectively. Experimental results show that our method substantially boosts the strong ELECTRA baseline on four public benchmark datasets and achieves new state-of-the-art performance over previous methods. A series of ablation studies are conducted to demonstrate the effectiveness of our method.

* 9 pages, 2 figures, 9 tables 
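
A minimal sketch of the decoupling masks such a masking mechanism needs: boolean matrices that limit each token's attention to its own utterance, to other utterances, or to the utterances of one speaker role. The exact channel set and how the masks are wired into the PrLM's self-attention may differ from this toy version.

```python
import torch

def dialogue_attention_masks(utt_ids: torch.Tensor, spk_ids: torch.Tensor):
    """Build decoupling masks for utterance- and speaker-aware attention.

    utt_ids: [seq_len] utterance index of each token.
    spk_ids: [seq_len] speaker index of each token (e.g. 0=sender, 1=receiver).
    Returns boolean matrices where True marks positions a token MAY attend
    to in each channel. Feeding these into masked self-attention is one way
    to decouple the representations described in the abstract.
    """
    same_utt = utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)   # current utterance
    other_utt = ~same_utt                                      # other utterances
    same_spk = spk_ids.unsqueeze(0) == spk_ids.unsqueeze(1)    # same speaker role
    other_spk = ~same_spk                                      # the other speaker
    return {"current_utterance": same_utt, "other_utterances": other_utt,
            "same_speaker": same_spk, "other_speaker": other_spk}

# Toy dialogue: three utterances, two speakers, seven tokens.
utt = torch.tensor([0, 0, 0, 1, 1, 2, 2])
spk = torch.tensor([0, 0, 0, 1, 1, 0, 0])
masks = dialogue_attention_masks(utt, spk)
print(masks["current_utterance"].shape)  # torch.Size([7, 7])
```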

Composing Answer from Multi-spans for Reading Comprehension

Sep 14, 2020
Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

This paper presents a novel method to generate answers for non-extractive machine reading comprehension (MRC) tasks whose answers cannot simply be extracted as a single span from the given passages. Using a pointer-network-style extractive decoder for this type of MRC may result in unsatisfactory performance when the ground-truth answers are given by human annotators or are highly re-paraphrased from parts of the passages. On the other hand, using a generative decoder cannot guarantee that the resulting answers have well-formed syntax and semantics when encountering long sentences. Therefore, to alleviate the obvious drawbacks of both sides, we propose a method that composes answers from extracted multi-spans learned by our model as highly confident $n$-gram candidates in the given passage. That is, the returned answers are composed of discontinuous multi-spans rather than just one consecutive span in the given passages. The proposed method is simple but effective: empirical experiments on MS MARCO show that the proposed method performs better at accurately generating long answers and substantially outperforms two competitive, typical one-span and Seq2Seq baseline decoders.
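
The composing step itself can be pictured as selecting highly confident, non-overlapping span candidates and stitching them together in passage order; the sketch below shows only that step, with a hand-made score dictionary standing in for the model's span scorer.

```python
def compose_answer(span_scores: dict, tokens: list, top_k: int = 3):
    """Compose an answer from highly confident, non-overlapping n-gram spans.

    span_scores: {(start, end): confidence} for candidate spans, produced by
    any span scorer; here it is a plain dict for illustration. Spans are
    greedily selected by confidence, overlaps are discarded, and the
    survivors are stitched together in passage order. This is a sketch of
    the composing step only, not the paper's full model.
    """
    chosen = []
    for (s, e), score in sorted(span_scores.items(), key=lambda kv: -kv[1]):
        if len(chosen) == top_k:
            break
        if all(e <= cs or s >= ce for cs, ce in chosen):   # no overlap
            chosen.append((s, e))
    chosen.sort()                                          # passage order
    return " ".join(" ".join(tokens[s:e]) for s, e in chosen)

tokens = "the answer is forty two according to the book".split()
scores = {(1, 2): 0.3, (3, 5): 0.9, (4, 7): 0.5, (7, 9): 0.8}
print(compose_answer(scores, tokens))  # "answer forty two the book"
```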

Syntax Role for Neural Semantic Role Labeling

Sep 12, 2020
Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai

Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies with traditional models have shown that syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information has been challenged by a few recent neural SRL studies that demonstrate impressive performance without syntactic backbones and suggest that syntactic information becomes much less important for neural semantic role labeling, especially when paired with recent deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation of the relevance of syntactic information to SRL, for both the dependency and span formalisms and in both monolingual and multilingual settings. This paper intends to quantify the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks (baselines): sequence-based, tree-based, and graph-based, which are accompanied by two categories of methods for exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all available languages, and the results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we show the quantitative significance of syntax to neural SRL models together with a thorough empirical survey using existing models.
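
For the syntax pruning-based category, a classic trick is to keep only argument candidates close to the predicate in the dependency tree. The sketch below implements a simple within-k-hops variant; the pruning rules used by the surveyed models may differ.

```python
from collections import deque

def prune_argument_candidates(heads: list, predicate: int, k: int = 2):
    """Keep only argument candidates within k hops of the predicate.

    heads: dependency heads, heads[i] is the head of token i (-1 for root).
    This is a simple syntax-pruning heuristic in the spirit of the
    "syntax pruning-based" category; exact rules in the surveyed models differ.
    """
    n = len(heads)
    neighbors = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:                       # treat the tree as undirected
            neighbors[child].append(head)
            neighbors[head].append(child)
    dist = {predicate: 0}
    queue = deque([predicate])
    while queue:
        u = queue.popleft()
        if dist[u] == k:                    # do not expand past k hops
            continue
        for v in neighbors[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sorted(dist)                     # token indices kept as candidates

# Toy tree: heads of tokens 0..4 (token 1 is the root).
print(prune_argument_candidates(heads=[1, -1, 1, 2, 3], predicate=2, k=2))
# -> [0, 1, 2, 3, 4]; with k=1 it would be [1, 2, 3]
```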

Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation

Sep 10, 2020
Junlong Li, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

Pre-trained Language Models (PrLMs) have been widely used as backbones in many Natural Language Processing (NLP) tasks. The common process of utilizing PrLMs is to first pre-train on large-scale general corpora with task-independent LM training objectives and then fine-tune on task datasets with task-specific training objectives. Pre-training in a task-independent way enables the models to learn language representations that are universal to some extent, but it fails to capture crucial task-specific features in the meantime. This leads to an incompatibility between pre-training and fine-tuning. To address this issue, we introduce task-specific pre-training on in-domain, task-related corpora with task-specific objectives. This procedure is placed between the original two stages to enhance the model's understanding capacity for specific tasks. In this work, we focus on Dialogue-related Natural Language Processing (DrNLP) tasks and design a Dialogue-Adaptive Pre-training Objective (DAPO) based on some important qualities for assessing dialogues that are usually ignored by general LM pre-training objectives. PrLMs trained with DAPO on a large in-domain dialogue corpus are then fine-tuned for downstream DrNLP tasks. Experimental results show that models with DAPO surpass those with general LM pre-training objectives and other strong baselines on downstream DrNLP tasks.
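
The abstract does not define DAPO concretely here, so the sketch below only shows the general shape of such a task-adaptive objective: a masked-LM term mixed with an auxiliary dialogue-level term (a made-up context-response coherence judgment). The auxiliary head, its label, and the mixing weight are all assumptions, not the paper's objective.

```python
import torch
import torch.nn as nn

class DialogueAdaptivePretrainLoss(nn.Module):
    """Sketch of a dialogue-adaptive pre-training loss.

    Combines a masked-LM term with an auxiliary dialogue-level term (here,
    a binary coherence judgment on context-response pairs). The actual DAPO
    objective is built from dialogue-assessment qualities and may differ;
    the mixing weight alpha is an assumption.
    """

    def __init__(self, hidden: int = 768, vocab: int = 30000, alpha: float = 0.5):
        super().__init__()
        self.mlm_head = nn.Linear(hidden, vocab)
        self.coherence_head = nn.Linear(hidden, 2)
        self.alpha = alpha
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, token_states, mlm_labels, cls_state, coherence_label):
        # token_states: [batch, seq, hidden] from any Transformer encoder
        # mlm_labels:   [batch, seq] with -100 at unmasked positions
        # cls_state:    [batch, hidden] pooled representation of the dialogue
        # coherence_label: [batch] 1 if the response fits the context, else 0
        mlm_loss = self.ce(self.mlm_head(token_states).flatten(0, 1),
                           mlm_labels.flatten())
        dial_loss = self.ce(self.coherence_head(cls_state), coherence_label)
        return mlm_loss + self.alpha * dial_loss

loss_fn = DialogueAdaptivePretrainLoss()
labels = torch.full((2, 16), -100, dtype=torch.long)
labels[:, 3] = 7                      # pretend one masked position per example
loss = loss_fn(torch.randn(2, 16, 768), labels,
               torch.randn(2, 768), torch.tensor([1, 0]))
print(loss.item())
```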

Learning Universal Representations from Word to Sentence

Sep 10, 2020
Yian Li, Hai Zhao

Despite well-developed, cutting-edge representation learning for language, most language representation models usually focus on a specific level of linguistic unit, which causes great inconvenience when handling multiple layers of linguistic objects in a unified way. Thus, this work introduces and explores universal representation learning, i.e., embeddings of different levels of linguistic units in a uniform vector space, through a task-independent evaluation. We present our approach of constructing analogy datasets in terms of words, phrases, and sentences and experiment with multiple representation models to examine geometric properties of the learned vector space. We then empirically verify that well pre-trained Transformer models incorporating appropriate training settings can effectively yield universal representations. In particular, our implementation of fine-tuning ALBERT on NLI and PPDB datasets achieves the highest accuracy on analogy tasks at different linguistic levels. Further experiments on an insurance FAQ task show the effectiveness of universal representation models in real-world applications.
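
The analogy probe underlying such an evaluation is standard: answer "a is to b as c is to ?" by nearest-neighbor search around b - a + c in the shared space. The sketch below uses a toy embedding table; in the paper the vectors come from the representation models under study.

```python
import numpy as np

def solve_analogy(emb: dict, a: str, b: str, c: str):
    """Answer "a is to b as c is to ?" by vector arithmetic.

    emb maps a linguistic unit (word, phrase, or sentence) to a vector in
    the shared space. The nearest candidate to (b - a + c) by cosine
    similarity, excluding the query items, is returned.
    """
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for cand, vec in emb.items():
        if cand in (a, b, c):
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target) + 1e-8)
        if sim > best_sim:
            best, best_sim = cand, sim
    return best

# Toy 2-d example where the offset structure is exact.
emb = {"king": np.array([1.0, 1.0]), "queen": np.array([1.0, -1.0]),
       "man": np.array([0.5, 1.0]), "woman": np.array([0.5, -1.0])}
print(solve_analogy(emb, "man", "woman", "king"))  # queen
```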
