
Armand Joulin

INRIA - École Normale Supérieure

Pruning Convolutional Neural Networks with Self-Supervision

Jan 10, 2020

Libri-Light: A Benchmark for ASR with Limited or No Supervision

Dec 17, 2019

CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data

Nov 15, 2019

CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web

Nov 10, 2019

Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment

Oct 15, 2019

Reducing Transformer Depth on Demand with Structured Dropout

Sep 25, 2019

And the Bit Goes Down: Revisiting the Quantization of Neural Networks

Jul 29, 2019

Why Build an Assistant in Minecraft?

Jul 25, 2019

Augmenting Self-attention with Persistent Memory

Jul 02, 2019

Adaptive Attention Span in Transformers

May 19, 2019