
Edouard Grave


Depth-Adaptive Transformer
Oct 22, 2019

Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment
Oct 15, 2019

Reducing Transformer Depth on Demand with Structured Dropout
Sep 25, 2019

Don't Forget the Long Tail! A Comprehensive Analysis of Morphological Generalization in Bilingual Lexicon Induction
Sep 06, 2019

Augmenting Self-attention with Persistent Memory
Jul 02, 2019

Misspelling Oblivious Word Embeddings
May 23, 2019

Adaptive Attention Span in Transformers
May 19, 2019

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling
Dec 28, 2018

Unsupervised Hyperalignment for Multilingual Word Embeddings
Nov 02, 2018

Lightweight Adaptive Mixture of Neural and N-gram Language Models
Oct 26, 2018