
Ronan Collobert

Pseudo-Labeling for Massively Multilingual Speech Recognition

Oct 30, 2021

Word Order Does Not Matter For Speech Recognition

Oct 18, 2021

Kaizen: Continuously improving teacher using Exponential Moving Average for semi-supervised speech recognition

Jun 14, 2021
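The Kaizen title names its core mechanism: the teacher model's weights are an exponential moving average (EMA) of the student's weights. A minimal sketch of an EMA update, assuming a plain list-of-floats parameterization; the function name and decay value are illustrative, not from the paper (decay values in practice are typically close to 1, e.g. 0.999):

```python
# Hypothetical sketch of an EMA teacher update: each teacher weight is
# blended toward the corresponding student weight. Parameters are plain
# floats here for illustration; real models update tensors in place.

def ema_update(teacher_params, student_params, decay=0.999):
    """Return teacher weights moved toward the student by (1 - decay)."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

# Example with decay=0.5 so the arithmetic is easy to follow:
teacher = [1.0, 0.0]
student = [0.0, 1.0]
teacher = ema_update(teacher, student, decay=0.5)
print(teacher)  # [0.5, 0.5]
```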

CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings

Jun 06, 2021
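One plausible reading of "Continuous Augmented Positional Embeddings": token positions are treated as continuous values and randomly perturbed during training (a shared global shift plus small per-position jitter) before a standard sinusoidal embedding is computed. The sketch below illustrates that idea only; the augmentation ranges, function names, and the exact perturbation scheme are assumptions, not the paper's specification:

```python
import math
import random

# Classic sinusoidal embedding, accepting continuous (non-integer) positions.
def sinusoidal(pos, dim=8):
    return [math.sin(pos / 10000 ** (2 * (i // 2) / dim)) if i % 2 == 0
            else math.cos(pos / 10000 ** (2 * (i // 2) / dim))
            for i in range(dim)]

# Hypothetical augmentation: one shared random offset for the sequence,
# plus independent small jitter per position. Ranges are illustrative.
def augmented_positions(n, global_shift=5.0, local_jitter=0.5):
    shift = random.uniform(-global_shift, global_shift)
    return [i + shift + random.uniform(-local_jitter, local_jitter)
            for i in range(n)]

embeddings = [sinusoidal(p) for p in augmented_positions(10)]
```

Because the embedding accepts any real-valued position, the perturbed positions need no rounding, which is what makes the augmentation "continuous".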

Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training

Apr 02, 2021

MLS: A Large-Scale Multilingual Dataset for Speech Research

Dec 19, 2020

Joint Masked CPC and CTC Training for ASR

Oct 30, 2020

Rethinking Evaluation in ASR: Are Our Models Robust Enough?

Oct 22, 2020

slimIPL: Language-Model-Free Iterative Pseudo-Labeling

Oct 22, 2020
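The slimIPL title describes iterative pseudo-labeling without an external language model: the acoustic model's own greedy transcriptions of unlabeled audio are recycled as training targets. A toy sketch of that loop, assuming a stand-in model interface (`transcribe`, `train_step`); all names and the round count are hypothetical, and the real method includes details (e.g. a dynamic pseudo-label cache) not shown here:

```python
class ToyASR:
    """Stand-in for an acoustic model; real models emit transcripts."""
    def __init__(self):
        self.steps = 0

    def transcribe(self, audio):
        return "greedy transcript"  # greedy decoding, no LM rescoring

    def train_step(self, audio, text):
        self.steps += 1

def iterative_pseudo_label(model, labeled, unlabeled, rounds=3):
    for _ in range(rounds):
        # Re-label the unlabeled pool with the current model each round.
        pseudo = [(x, model.transcribe(x)) for x in unlabeled]
        # Train on labeled data plus the fresh pseudo-labeled data.
        for x, y in labeled + pseudo:
            model.train_step(x, y)
    return model

m = iterative_pseudo_label(ToyASR(), [("utt1", "ref")], ["utt2", "utt3"],
                           rounds=2)
print(m.steps)  # 2 rounds x 3 examples = 6 updates
```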

Self-training and Pre-training are Complementary for Speech Recognition

Oct 22, 2020