Dan Berrebbi

Findings of the 2023 ML-SUPERB Challenge: Pre-Training and Evaluation over More Languages and Beyond

Oct 09, 2023

Reproducing Whisper-Style Training Using an Open-Source Toolkit and Publicly Available Data

Oct 02, 2023

Joint Prediction and Denoising for Large-scale Multilingual Self-supervised Learning

Sep 28, 2023

ML-SUPERB: Multilingual Speech Universal PERformance Benchmark

May 18, 2023

ESPnet-ST-v2: Multipurpose Spoken Language Translation Toolkit

Apr 11, 2023

More Speaking or More Speakers?

Nov 02, 2022

Avoid Overthinking in Self-Supervised Models for Speech Recognition

Nov 01, 2022

Continuous Pseudo-Labeling from the Start

Oct 17, 2022

Combining Spectral and Self-Supervised Features for Low Resource Speech Recognition and Translation

Apr 18, 2022

Joint Modeling of Code-Switched and Monolingual ASR via Conditional Factorization

Nov 29, 2021