Hai Zhao

High-order Semantic Role Labeling

Oct 09, 2020
Zuchao Li, Hai Zhao, Rui Wang, Kevin Parnow

Topic-Aware Multi-turn Dialogue Modeling

Sep 26, 2020
Yi Xu, Hai Zhao, Zhuosheng Zhang

Document-level Neural Machine Translation with Document Embeddings

Sep 16, 2020
Shu Jiang, Hai Zhao, Zuchao Li, Bao-Liang Lu

Graph-to-Sequence Neural Machine Translation

Sep 16, 2020
Sufeng Duan, Hai Zhao, Rui Wang

Multi-span Style Extraction for Generative Reading Comprehension

Sep 15, 2020
Junjie Yang, Zhuosheng Zhang, Hai Zhao

Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue

Sep 14, 2020
Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

Composing Answer from Multi-spans for Reading Comprehension

Sep 14, 2020
Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

Syntax Role for Neural Semantic Role Labeling

Sep 12, 2020
Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai

Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation

Sep 10, 2020
Junlong Li, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

Learning Universal Representations from Word to Sentence

Sep 10, 2020
Yian Li, Hai Zhao
