
Hai Zhao

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

High-order Semantic Role Labeling

Oct 09, 2020

Topic-Aware Multi-turn Dialogue Modeling

Sep 26, 2020

Document-level Neural Machine Translation with Document Embeddings

Sep 16, 2020

Graph-to-Sequence Neural Machine Translation

Sep 16, 2020

Multi-span Style Extraction for Generative Reading Comprehension

Sep 15, 2020

Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue

Sep 14, 2020

Composing Answer from Multi-spans for Reading Comprehension

Sep 14, 2020

Syntax Role for Neural Semantic Role Labeling

Sep 12, 2020

Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation

Sep 10, 2020

Learning Universal Representations from Word to Sentence

Sep 10, 2020