
Alexei Baevski


Effectiveness of self-supervised pre-training for speech recognition

Nov 10, 2019

vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations

Oct 12, 2019

Facebook FAIR's WMT19 News Translation Task Submission

Jul 15, 2019

wav2vec: Unsupervised Pre-training for Speech Recognition

May 24, 2019

fairseq: A Fast, Extensible Toolkit for Sequence Modeling

Apr 01, 2019

Pre-trained Language Model Representations for Language Generation

Apr 01, 2019

Cloze-driven Pretraining of Self-attention Networks

Mar 19, 2019

Pay Less Attention with Lightweight and Dynamic Convolutions

Jan 29, 2019

Adaptive Input Representations for Neural Language Modeling

Oct 01, 2018