"speech": models, code, and papers

SNP2Vec: Scalable Self-Supervised Pre-Training for Genome-Wide Association Study

Apr 14, 2022
Samuel Cahyawijaya, Tiezheng Yu, Zihan Liu, Tiffany T. W. Mak, Xiaopu Zhou, Nancy Y. Ip, Pascale Fung

Figures 1–4

Human-in-the-Loop for Data Collection: a Multi-Target Counter Narrative Dataset to Fight Online Hate Speech

Jul 19, 2021
Margherita Fanton, Helena Bonaldi, Serra Sinem Tekiroglu, Marco Guerini

Figures 1–4

Hopeful_Men@LT-EDI-EACL2021: Hope Speech Detection Using Indic Transliteration and Transformers

Feb 25, 2021
Ishan Sanjeev Upadhyay, Nikhil E, Anshul Wadhawan, Radhika Mamidi

Figures 1–4

Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese

May 26, 2022
Kurt Micallef, Albert Gatt, Marc Tanti, Lonneke van der Plas, Claudia Borg

Figures 1–4

Speaker independence of neural vocoders and their effect on parametric resynthesis speech enhancement

Nov 14, 2019
Soumi Maiti, Michael I. Mandel

Figures 1–4

Investigating Active-learning-based Training Data Selection for Speech Spoofing Countermeasure

Mar 28, 2022
Xin Wang, Junichi Yamagishi

Figures 1–4

Speech Emotion Recognition with Multiscale Area Attention and Data Augmentation

Feb 03, 2021
Mingke Xu, Fan Zhang, Xiaodong Cui, Wei Zhang

Figures 1–4

Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis: ZeroSpeech 2020 Challenge

May 24, 2020
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Figures 1–4

The ASRU 2019 Mandarin-English Code-Switching Speech Recognition Challenge: Open Datasets, Tracks, Methods and Results

Jul 12, 2020
Xian Shi, Qiangze Feng, Lei Xie

Figures 1–4