Dan Berrebbi


Findings of the 2023 ML-SUPERB Challenge: Pre-Training and Evaluation over More Languages and Beyond

Oct 09, 2023
Jiatong Shi, William Chen, Dan Berrebbi, Hsiu-Hsuan Wang, Wei-Ping Huang, En-Pei Hu, Ho-Lam Chuang, Xuankai Chang, Yuxun Tang, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe

Reproducing Whisper-Style Training Using an Open-Source Toolkit and Publicly Available Data

Oct 02, 2023
Yifan Peng, Jinchuan Tian, Brian Yan, Dan Berrebbi, Xuankai Chang, Xinjian Li, Jiatong Shi, Siddhant Arora, William Chen, Roshan Sharma, Wangyou Zhang, Yui Sudo, Muhammad Shakeel, Jee-weon Jung, Soumi Maiti, Shinji Watanabe

Joint Prediction and Denoising for Large-scale Multilingual Self-supervised Learning

Sep 28, 2023
William Chen, Jiatong Shi, Brian Yan, Dan Berrebbi, Wangyou Zhang, Yifan Peng, Xuankai Chang, Soumi Maiti, Shinji Watanabe

ML-SUPERB: Multilingual Speech Universal PERformance Benchmark

May 18, 2023
Jiatong Shi, Dan Berrebbi, William Chen, Ho-Lam Chung, En-Pei Hu, Wei Ping Huang, Xuankai Chang, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe

ESPnet-ST-v2: Multipurpose Spoken Language Translation Toolkit

Apr 11, 2023
Brian Yan, Jiatong Shi, Yun Tang, Hirofumi Inaguma, Yifan Peng, Siddharth Dalmia, Peter Polák, Patrick Fernandes, Dan Berrebbi, Tomoki Hayashi, Xiaohui Zhang, Zhaoheng Ni, Moto Hira, Soumi Maiti, Juan Pino, Shinji Watanabe

More Speaking or More Speakers?

Nov 02, 2022
Dan Berrebbi, Ronan Collobert, Navdeep Jaitly, Tatiana Likhomanenko

Avoid Overthinking in Self-Supervised Models for Speech Recognition

Nov 01, 2022
Dan Berrebbi, Brian Yan, Shinji Watanabe

Continuous Pseudo-Labeling from the Start

Oct 17, 2022
Dan Berrebbi, Ronan Collobert, Samy Bengio, Navdeep Jaitly, Tatiana Likhomanenko

Combining Spectral and Self-Supervised Features for Low Resource Speech Recognition and Translation

Apr 18, 2022
Dan Berrebbi, Jiatong Shi, Brian Yan, Osbel Lopez-Francisco, Jonathan D. Amith, Shinji Watanabe
