Alexei Baevski

Unsupervised Cross-lingual Representation Learning for Speech Recognition

Jun 24, 2020
Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

Jun 20, 2020
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

Effectiveness of self-supervised pre-training for speech recognition

Nov 10, 2019
Alexei Baevski, Michael Auli, Abdelrahman Mohamed

vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations

Oct 12, 2019
Alexei Baevski, Steffen Schneider, Michael Auli

Facebook FAIR's WMT19 News Translation Task Submission

Jul 15, 2019
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov

wav2vec: Unsupervised Pre-training for Speech Recognition

May 24, 2019
Steffen Schneider, Alexei Baevski, Ronan Collobert, Michael Auli

fairseq: A Fast, Extensible Toolkit for Sequence Modeling

Apr 01, 2019
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli

Pre-trained Language Model Representations for Language Generation

Apr 01, 2019
Sergey Edunov, Alexei Baevski, Michael Auli

Cloze-driven Pretraining of Self-attention Networks

Mar 19, 2019
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli
