Hai Zhao

A Smart Sliding Chinese Pinyin Input Method Editor on Touchscreen

Sep 03, 2019
Zhuosheng Zhang, Zhen Meng, Hai Zhao

This paper presents a smart sliding Chinese pinyin Input Method Editor (IME) for touchscreen devices, which lets users slide a finger from one key to another instead of tapping keys one by one, while the target Chinese character sequence is predicted during the sliding process so that users can input Chinese characters efficiently. Moreover, the layout of the IME's virtual keyboard adapts to user sliding for more efficient input; this layout adaptation is learned with recurrent neural networks (RNN) and deep reinforcement learning. The pinyin-to-character converter is implemented as a sequence-to-sequence (Seq2Seq) model that predicts the target Chinese sequence. A sliding simulator is built to automatically produce sliding samples for model training and virtual keyboard testing. The key advantage of the proposed IME is that nearly all of its built-in tactics can be optimized automatically with deep learning algorithms purely from user behavior. Empirical studies verify the effectiveness of the proposed model and show improved user input efficiency.
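
As an illustration of the pinyin-to-character step only, here is a minimal GRU encoder-decoder sketch in PyTorch; vocabulary sizes, dimensions, and the toy inputs are placeholders, not the paper's configuration, and the sliding-layout RL component is not shown.

```python
# Minimal Seq2Seq sketch: encode a pinyin token sequence, decode Chinese characters.
import torch
import torch.nn as nn

class PinyinToCharSeq2Seq(nn.Module):
    def __init__(self, pinyin_vocab, char_vocab, emb=128, hidden=256):
        super().__init__()
        self.enc_emb = nn.Embedding(pinyin_vocab, emb)
        self.dec_emb = nn.Embedding(char_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, char_vocab)

    def forward(self, pinyin_ids, char_ids):
        # Encode the pinyin sequence; its final state initializes the decoder.
        _, h = self.encoder(self.enc_emb(pinyin_ids))
        dec_out, _ = self.decoder(self.dec_emb(char_ids), h)
        return self.out(dec_out)  # (batch, tgt_len, char_vocab) logits

model = PinyinToCharSeq2Seq(pinyin_vocab=500, char_vocab=6000)
logits = model(torch.randint(0, 500, (2, 10)), torch.randint(0, 6000, (2, 6)))
print(logits.shape)  # torch.Size([2, 6, 6000])
```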

SG-Net: Syntax-Guided Machine Reading Comprehension

Sep 03, 2019
Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang

For machine reading comprehension, the capacity to effectively model linguistic knowledge from detail-riddled and lengthy passages while getting rid of noise is essential for improving performance. Traditional attentive models attend to all words without explicit constraint, which results in inaccurate concentration on dispensable words. In this work, we propose using syntax to guide the text modeling of both passages and questions by incorporating explicit syntactic constraints into the attention mechanism for better, linguistically motivated word representations. To serve this purpose, we propose a novel dual contextual architecture called the syntax-guided network (SG-Net), which combines a BERT context vector and a syntax-guided context vector to provide more fine-grained representations. Extensive experiments on popular benchmarks, including SQuAD 2.0 and RACE, show that the proposed approach achieves a substantial and significant improvement over the fine-tuned BERT baseline.
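
A minimal sketch of the syntax-constrained attention idea: scores are masked so each token only attends to positions a syntactic mask allows, and the result is concatenated with the ordinary context vector. The toy mask and the simple concatenation fusion are assumptions for illustration, not the paper's exact design.

```python
# Syntax-guided self-attention sketch: a (batch, seq, seq) mask restricts attention.
import torch
import torch.nn.functional as F

def syntax_guided_attention(hidden, syntax_mask):
    """hidden: (batch, seq, dim); syntax_mask: (batch, seq, seq) with 1 = allowed."""
    scores = hidden @ hidden.transpose(1, 2) / hidden.size(-1) ** 0.5
    scores = scores.masked_fill(syntax_mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ hidden  # syntax-guided context vectors

batch, seq, dim = 2, 5, 8
hidden = torch.randn(batch, seq, dim)
# Toy mask: each token attends to itself and its left neighbor (a real mask
# would come from dependency parses).
mask = torch.eye(seq).unsqueeze(0).repeat(batch, 1, 1)
mask[:, 1:, :-1] += torch.eye(seq - 1)
syntax_context = syntax_guided_attention(hidden, mask)
bert_context = hidden  # stand-in for the ordinary BERT context vector
fused = torch.cat([bert_context, syntax_context], dim=-1)  # dual context
```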

Open Named Entity Modeling from Embedding Distribution

Aug 31, 2019
Ying Luo, Hai Zhao, Tao Wang, Linlin Li, Luo Si

In this paper, we report our discovery on named entity distribution in general word embedding space, which helps an open definition on multilingual named entity definition rather than previous closed and constraint definition on named entities through a named entity dictionary, which is usually derived from huaman labor and replies on schedual update. Our initial visualization of monolingual word embeddings indicates named entities tend to gather together despite of named entity types and language difference, which enable us to model all named entities using a specific geometric structure inside embedding space,namely, the named entity hypersphere. For monolingual case, the proposed named entity model gives an open description on diverse named entity types and different languages. For cross-lingual case, mapping the proposed named entity model provides a novel way to build named entity dataset for resource-poor languages. At last, the proposed named entity model may be shown as a very useful clue to significantly enhance state-of-the-art named entity recognition systems generally.
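
To make the hypersphere picture concrete, here is a toy sketch: fit a center and radius from embeddings of known entities, then flag new vectors that fall inside the sphere. The random toy data and the percentile-based radius are illustrative assumptions; they are not the paper's fitting procedure.

```python
# Toy "named entity hypersphere": entities cluster, so a center + radius separates them.
import numpy as np

rng = np.random.default_rng(0)
entity_vecs = rng.normal(loc=1.0, scale=0.3, size=(200, 50))  # toy NE cluster
other_vecs = rng.normal(loc=0.0, scale=0.3, size=(200, 50))   # toy non-entities

center = entity_vecs.mean(axis=0)
# Radius chosen to cover most known entities (here: 95th percentile distance).
radius = np.percentile(np.linalg.norm(entity_vecs - center, axis=1), 95)

def inside_hypersphere(vec):
    return np.linalg.norm(vec - center) <= radius

print(sum(inside_hypersphere(v) for v in entity_vecs))  # most toy entities inside
print(sum(inside_hypersphere(v) for v in other_vecs))   # few toy non-entities inside
```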

Named Entity Recognition Only from Word Embeddings

Aug 31, 2019
Ying Luo, Hai Zhao, Junlang Zhan

Deep neural network models have helped named entity (NE) recognition achieve amazing performance without handcrafted features. However, existing systems require large amounts of human-annotated training data. Efforts have been made to replace human annotations with external knowledge (e.g., NE dictionaries, part-of-speech tags), yet obtaining such effective resources is itself a challenge. In this work, we propose a fully unsupervised NE recognition model which only needs to take informative clues from pre-trained word embeddings. We first apply a Gaussian Hidden Markov Model and a Deep Autoencoding Gaussian Mixture Model to word embeddings for entity span detection and type prediction, and then design an instance selector based on reinforcement learning to distinguish positive sentences from noisy sentences and refine these coarse-grained annotations through neural networks. Extensive experiments on CoNLL benchmark datasets demonstrate that our proposed lightweight NE recognition model achieves remarkable performance without using any annotated lexicon or corpus.
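
As a rough illustration of the type-prediction step only, the sketch below clusters word embeddings with a Gaussian mixture so that entity types emerge as mixture components. The paper's Gaussian HMM span detector, the DAGMM, and the RL instance selector are omitted, and the toy data stands in for real pre-trained embeddings.

```python
# Unsupervised coarse type prediction by Gaussian-mixture clustering of embeddings.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "embeddings" drawn from three latent types (e.g., PER / LOC / ORG).
emb = np.vstack([rng.normal(m, 0.2, size=(100, 16)) for m in (-1.0, 0.0, 1.0)])

gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
coarse_types = gmm.fit_predict(emb)  # unsupervised type label per word
print(np.bincount(coarse_types))     # roughly 100 words per component
```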

Parsing All: Syntax and Semantics, Dependencies and Spans

Aug 30, 2019
Junru Zhou, Zuchao Li, Hai Zhao

Both syntactic and semantic structures are key linguistic contextual clues, and parsing the latter has been well shown to benefit from parsing the former. However, few works have attempted to let semantic parsing help syntactic parsing. As linguistic representation formalisms, both syntax and semantics may be represented in either span (constituent/phrase) or dependency form, and joint learning over both has also seldom been explored. In this paper, we propose a novel joint model of syntactic and semantic parsing on both span and dependency representations, which incorporates syntactic information effectively into the encoder of the neural network and benefits from the two representation formalisms in a uniform way. The experiments show that semantics and syntax can benefit each other by optimizing joint objectives. Our single model achieves new state-of-the-art or competitive results on both span and dependency semantic parsing on PropBank benchmarks and on both dependency and constituent syntactic parsing on the Penn Treebank.
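
A minimal sketch of what "optimizing joint objectives" can look like: one shared encoder, one head per formalism (span/dependency x syntax/semantics), and a summed loss. The flat linear heads, dimensions, and toy labels are placeholders; the paper's actual scorers and encoder are considerably richer.

```python
# Shared encoder with four task heads trained under a single summed objective.
import torch
import torch.nn as nn

class JointParser(nn.Module):
    def __init__(self, vocab=1000, dim=128, n_labels=20):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.heads = nn.ModuleDict({
            name: nn.Linear(dim, n_labels)
            for name in ("syn_span", "syn_dep", "sem_span", "sem_dep")
        })

    def forward(self, tokens):
        enc, _ = self.encoder(self.emb(tokens))
        return {name: head(enc) for name, head in self.heads.items()}

model = JointParser()
tokens = torch.randint(0, 1000, (2, 7))
gold = {n: torch.randint(0, 20, (2, 7)) for n in ("syn_span", "syn_dep", "sem_span", "sem_dep")}
logits = model(tokens)
# Joint objective: a (possibly weighted) sum over all four parsing losses.
loss = sum(nn.functional.cross_entropy(logits[n].transpose(1, 2), gold[n]) for n in gold)
loss.backward()
```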

* arXiv admin note: text overlap with arXiv:1907.02684 
DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension

Aug 30, 2019
Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou

Multi-choice reading comprehension is a challenging task that requires selecting an answer from a set of candidate options given a passage and a question. Previous approaches usually calculate only a question-aware passage representation and ignore the passage-aware question representation when modeling the relationship between passage and question, which clearly cannot make the most of the information between the two. In this work, we propose the dual co-matching network (DCMN), which models the relationship among passage, question, and answer options bidirectionally. Besides, inspired by how humans solve multi-choice questions, we integrate two reading strategies into our model: (i) passage sentence selection, which finds the most salient supporting sentences to answer the question, and (ii) answer option interaction, which encodes the comparison information between answer options. DCMN integrated with the two strategies (DCMN+) obtains state-of-the-art results on five multi-choice reading comprehension datasets from different domains: RACE, SemEval-2018 Task 11, ROCStories, COIN, and MCTest.
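
The bidirectional matching idea can be sketched as follows: one attention matrix yields both the question-aware passage representation and the passage-aware question representation. Gating/fusion, the answer-option branch, and the two reading strategies are omitted; shapes are placeholders.

```python
# Dual (bidirectional) co-matching sketch between passage and question encodings.
import torch
import torch.nn.functional as F

def dual_co_match(passage, question):
    """passage: (batch, p_len, dim); question: (batch, q_len, dim)."""
    att = passage @ question.transpose(1, 2)                        # (batch, p_len, q_len)
    q_aware_p = F.softmax(att, dim=-1) @ question                   # passage attends to question
    p_aware_q = F.softmax(att.transpose(1, 2), dim=-1) @ passage    # and vice versa
    return q_aware_p, p_aware_q

p = torch.randn(2, 30, 64)
q = torch.randn(2, 10, 64)
q_aware_p, p_aware_q = dual_co_match(p, q)
print(q_aware_p.shape, p_aware_q.shape)  # (2, 30, 64) (2, 10, 64)
```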

Memorizing All for Implicit Discourse Relation Recognition

Aug 29, 2019
Hongxiao Bai, Hai Zhao, Junhan Zhao

Implicit discourse relation recognition is a challenging task due to the absence of informative clues from explicit connectives. Predicting the relation requires a deep understanding of the semantics of the sentence pair. Since an implicit discourse relation recognizer has to carefully handle the semantic similarity of the given sentence pairs while facing severe data sparsity, it should benefit from mastering the entire training data. Thus, in this paper, we propose a novel memory mechanism to tackle these challenges and further improve performance. The memory mechanism memorizes information by pairing the representations and discourse relations of all training instances, which directly addresses the data-hungry issue of current implicit discourse relation recognizers. Our experiments show that our full model, which memorizes the entire training set, reaches a new state of the art against strong baselines and, for the first time, exceeds the milestone of 60% accuracy on the 4-way task.
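
A toy sketch of a key-value memory over the training set: keys are sentence-pair representations, values are one-hot discourse relations, and a query mixes the stored relations by representation similarity. How the paper integrates this memory into the recognizer is not shown; this only illustrates the lookup step on random data.

```python
# Key-value memory lookup: similarity-weighted mixture of stored relation labels.
import torch
import torch.nn.functional as F

n_train, dim, n_relations = 1000, 128, 4
keys = torch.randn(n_train, dim)                      # stored pair representations
values = F.one_hot(torch.randint(0, n_relations, (n_train,)), n_relations).float()

def memory_read(query):
    """query: (dim,) representation of a new sentence pair."""
    weights = F.softmax(keys @ query / dim ** 0.5, dim=0)  # similarity over memory slots
    return weights @ values                                 # soft relation distribution

print(memory_read(torch.randn(dim)))
```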

Dual Skew Divergence Loss for Neural Machine Translation

Aug 22, 2019
Fengshun Xiao, Yingting Wu, Hai Zhao, Rui Wang, Shu Jiang

For neural sequence model training, maximum likelihood (ML) has commonly been adopted to optimize model parameters with respect to the corresponding objective. However, in sequence prediction tasks such as neural machine translation (NMT), training with the ML-based cross-entropy loss often leads to models that overgeneralize and get stuck in local optima. In this paper, we propose an extended loss function called dual skew divergence (DSD), which aims for a better tradeoff between generalization ability and error avoidance during NMT training. Our empirical study indicates that switching to the DSD loss after ML training has converged helps the model escape the local optimum and yields a stable performance improvement. Evaluations on the WMT 2014 English-German and English-French translation tasks demonstrate that the proposed loss indeed brings better translation performance than several baselines.
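
For intuition only, here is a sketch of a skew-divergence-style loss: the skew KL divergence is KL(p || a*q + (1-a)*p), and a "dual" version combines the two directions. The simple sum used below and the choice of alpha are assumptions for illustration; the paper's exact DSD formulation and weighting may differ.

```python
# Illustrative skew-divergence loss; NOT necessarily the paper's exact DSD form.
import torch
import torch.nn.functional as F

def skew_kl(p, q, alpha=0.9, eps=1e-8):
    """KL(p || alpha*q + (1-alpha)*p) for probability vectors along the last dim."""
    mix = alpha * q + (1 - alpha) * p
    return (p * ((p + eps) / (mix + eps)).log()).sum(-1)

def dual_skew_divergence(model_probs, target_probs, alpha=0.9):
    # Assumed combination of the two skewed directions.
    return (skew_kl(model_probs, target_probs, alpha)
            + skew_kl(target_probs, model_probs, alpha)).mean()

probs = F.softmax(torch.randn(4, 10), dim=-1)
target = F.one_hot(torch.randint(0, 10, (4,)), 10).float()
print(dual_skew_divergence(probs, target))
```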

* 9 pages 
Concurrent Parsing of Constituency and Dependency

Aug 18, 2019
Junru Zhou, Shuailiang Zhang, Hai Zhao

Constituent and dependency representations of syntactic structure share many linguistic and computational characteristics. This paper therefore makes the first attempt to introduce a new model capable of parsing constituents and dependencies at the same time, so that either parser can enhance the other. In particular, we evaluate the effect of different shared network components and empirically verify that dependency parsing benefits much more from the constituent parsing structure. The proposed parser achieves new state-of-the-art performance on both parsing tasks, constituent and dependency, on the PTB and CTB benchmarks.
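
A minimal sketch of the shared-encoder design: one sentence encoder feeds both a constituent (span) scorer and a dependency (head-selection) scorer. The flat span and bilinear arc scorers, dimensions, and the absence of decoding are simplifications; the real parser's components are far richer.

```python
# One shared encoder, two scoring heads: constituent spans and dependency arcs.
import torch
import torch.nn as nn

class ConcurrentParser(nn.Module):
    def __init__(self, vocab=1000, dim=128, n_const_labels=30):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.const_head = nn.Linear(4 * dim, n_const_labels)   # scores a (start, end) span
        self.arc_head = nn.Bilinear(2 * dim, 2 * dim, 1)       # scores a head-dependent pair

    def forward(self, tokens):
        enc, _ = self.encoder(self.emb(tokens))                # shared representation
        b, n, d = enc.shape
        spans = torch.cat([enc.unsqueeze(2).expand(b, n, n, d),
                           enc.unsqueeze(1).expand(b, n, n, d)], dim=-1)
        const_scores = self.const_head(spans)                  # (b, n, n, labels)
        arcs = self.arc_head(enc.unsqueeze(2).expand(b, n, n, d).reshape(-1, d),
                             enc.unsqueeze(1).expand(b, n, n, d).reshape(-1, d))
        return const_scores, arcs.view(b, n, n)                # (b, n, n) arc scores

model = ConcurrentParser()
const_scores, arc_scores = model(torch.randint(0, 1000, (2, 6)))
print(const_scores.shape, arc_scores.shape)
```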
