Abdelrahman Mohamed

Speech Resynthesis from Discrete Disentangled Self-Supervised Representations

Apr 01, 2021
Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux

Contrastive Semi-supervised Learning for ASR

Mar 09, 2021
Alex Xiao, Christian Fuegen, Abdelrahman Mohamed

Unsupervised Cross-lingual Representation Learning for Speech Recognition

Jun 24, 2020
Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

Jun 20, 2020
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

Large scale weakly and semi-supervised learning for low-resource video ASR

May 16, 2020
Kritika Singh, Vimal Manohar, Alex Xiao, Sergey Edunov, Ross Girshick, Vitaliy Liptchinsky, Christian Fuegen, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed

Libri-Light: A Benchmark for ASR with Limited or No Supervision

Dec 17, 2019
Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, Tatiana Likhomanenko, Gabriel Synnaeve, Armand Joulin, Abdelrahman Mohamed, Emmanuel Dupoux

Effectiveness of self-supervised pre-training for speech recognition

Nov 10, 2019
Alexei Baevski, Michael Auli, Abdelrahman Mohamed

Enforcing Encoder-Decoder Modularity in Sequence-to-Sequence Models

Nov 09, 2019
Siddharth Dalmia, Abdelrahman Mohamed, Mike Lewis, Florian Metze, Luke Zettlemoyer

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

Oct 29, 2019
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer

Training ASR models by Generation of Contextual Information

Oct 27, 2019
Kritika Singh, Dmytro Okhonko, Jun Liu, Yongqiang Wang, Frank Zhang, Ross Girshick, Sergey Edunov, Fuchun Peng, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed
