Ronan Collobert

Kaizen: Continuously improving teacher using Exponential Moving Average for semi-supervised speech recognition

Jun 14, 2021
Vimal Manohar, Tatiana Likhomanenko, Qiantong Xu, Wei-Ning Hsu, Ronan Collobert, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed

CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings

Jun 06, 2021
Tatiana Likhomanenko, Qiantong Xu, Ronan Collobert, Gabriel Synnaeve, Alex Rogozhnikov

Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training

Apr 02, 2021
Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli

MLS: A Large-Scale Multilingual Dataset for Speech Research

Dec 19, 2020
Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, Ronan Collobert

Joint Masked CPC and CTC Training for ASR

Oct 30, 2020
Chaitanya Talnikar, Tatiana Likhomanenko, Ronan Collobert, Gabriel Synnaeve

Rethinking Evaluation in ASR: Are Our Models Robust Enough?

Oct 22, 2020
Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Paden Tomasello, Jacob Kahn, Gilad Avidov, Ronan Collobert, Gabriel Synnaeve

slimIPL: Language-Model-Free Iterative Pseudo-Labeling

Oct 22, 2020
Tatiana Likhomanenko, Qiantong Xu, Jacob Kahn, Gabriel Synnaeve, Ronan Collobert

Self-training and Pre-training are Complementary for Speech Recognition

Oct 22, 2020
Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, Michael Auli

Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters

Jul 08, 2020
Vineel Pratap, Anuroop Sriram, Paden Tomasello, Awni Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, Ronan Collobert
