
Hai Zhao

Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

Smoothing Dialogue States for Open Conversational Machine Reading

Sep 02, 2021

Unsupervised Open-Domain Question Answering

Aug 31, 2021

Span Fine-tuning for Pre-trained Language Models

Aug 29, 2021

Cross-lingual Transferring of Pre-trained Contextualized Language Models

Jul 27, 2021

Graph-free Multi-hop Reading Comprehension: A Select-to-Guide Strategy

Jul 25, 2021

Dialogue-oriented Pre-training

Jun 01, 2021

Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice

May 30, 2021

Pre-training Universal Language Representation

May 30, 2021

Grammatical Error Correction as GAN-like Sequence Labeling

May 29, 2021

Structural Pre-training for Dialogue Comprehension

May 23, 2021