
Hai Zhao

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

Sparse Fuzzy Attention for Structured Sentiment Analysis

Sep 25, 2021

Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension

Sep 16, 2021

Enhanced Speaker-aware Multi-party Multi-turn Dialogue Comprehension

Sep 09, 2021

Smoothing Dialogue States for Open Conversational Machine Reading

Sep 02, 2021

Unsupervised Open-Domain Question Answering

Aug 31, 2021

Span Fine-tuning for Pre-trained Language Models

Aug 29, 2021

Cross-lingual Transferring of Pre-trained Contextualized Language Models

Jul 27, 2021

Graph-free Multi-hop Reading Comprehension: A Select-to-Guide Strategy

Jul 25, 2021

Dialogue-oriented Pre-training

Jun 01, 2021

Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice

May 30, 2021