"speech": models, code, and papers

Spatially Selective Deep Non-linear Filters for Speaker Extraction

Nov 04, 2022
Kristina Tesch, Timo Gerkmann

The IWSLT 2021 BUT Speech Translation Systems

Jul 13, 2021
Hari Krishna Vydana, Martin Karafiát, Lukáš Burget, "Honza" Černocký

NWPU-ASLP System for the VoicePrivacy 2022 Challenge

Sep 24, 2022
Jixun Yao, Qing Wang, Li Zhang, Pengcheng Guo, Yuhao Liang, Lei Xie

Where to Pay Attention in Sparse Training for Feature Selection?

Nov 26, 2022
Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu

Investigation of Deep Neural Network Acoustic Modelling Approaches for Low Resource Accented Mandarin Speech Recognition

Jan 24, 2022
Xurong Xie, Xiang Sui, Xunying Liu, Lan Wang

Fusing ASR Outputs in Joint Training for Speech Emotion Recognition

Oct 29, 2021
Yuanchao Li, Peter Bell, Catherine Lai

An Objective Evaluation Framework for Pathological Speech Synthesis

Jul 01, 2021
Bence Mark Halpern, Julian Fritsch, Enno Hermann, Rob van Son, Odette Scharenborg, Mathew Magimai-Doss

Pseudo-Labeling for Massively Multilingual Speech Recognition

Oct 30, 2021
Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert

Creating a morphological and syntactic tagged corpus for the Uzbek language

Oct 27, 2022
Maksud Sharipov, Jamolbek Mattiev, Jasur Sobirov, Rustam Baltayev

ESPnet-ONNX: Bridging a Gap Between Research and Production

Sep 20, 2022
Masao Someki, Yosuke Higuchi, Tomoki Hayashi, Shinji Watanabe
