
Jingfei Du

Self-training Improves Pre-training for Natural Language Understanding

Oct 05, 2020

Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval

Sep 27, 2020

General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference

Apr 29, 2020

Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model

Dec 20, 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Jul 26, 2019

Knowledge-Augmented Language Model and its Application to Unsupervised Named-Entity Recognition

Apr 09, 2019