Edouard Grave

Distilling Knowledge from Reader to Retriever for Question Answering

Dec 08, 2020

Beyond English-Centric Multilingual Machine Translation

Oct 21, 2020

Self-training Improves Pre-training for Natural Language Understanding

Oct 05, 2020

Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering

Jul 02, 2020

Training with Quantization Noise for Extreme Model Compression

Apr 17, 2020

Accessing Higher-level Representations in Sequential Transformers with Feedback Memory

Mar 09, 2020

End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures

Nov 19, 2019

CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data

Nov 15, 2019

CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB

Nov 10, 2019

Unsupervised Cross-lingual Representation Learning at Scale

Nov 05, 2019