Daniel Korzekwa

Non-Autoregressive TTS with Explicit Duration Modelling for Low-Resource Highly Expressive Speech
Jun 24, 2021
Raahil Shah, Kamil Pokora, Abdelhamid Ezzerg, Viacheslav Klimkov, Goeric Huybrechts, Bartosz Putrycz, Daniel Korzekwa, Thomas Merritt

Improving the expressiveness of neural vocoding with non-affine Normalizing Flows
Jun 16, 2021
Adam Gabryś, Yunlong Jiao, Viacheslav Klimkov, Daniel Korzekwa, Roberto Barra-Chicote

Weakly-supervised word-level pronunciation error detection in non-native English speech
Jun 07, 2021
Daniel Korzekwa, Jaime Lorenzo-Trueba, Thomas Drugman, Shira Calamaro, Bozena Kostek

Universal Neural Vocoding with Parallel WaveNet
Feb 15, 2021
Yunlong Jiao, Adam Gabrys, Georgi Tinchev, Bartosz Putrycz, Daniel Korzekwa, Viacheslav Klimkov

Mispronunciation Detection in Non-native (L2) English with Uncertainty Modeling
Feb 08, 2021
Daniel Korzekwa, Jaime Lorenzo-Trueba, Szymon Zaporowski, Shira Calamaro, Thomas Drugman, Bozena Kostek

Detection of Lexical Stress Errors in Non-native (L2) English with Data Augmentation and Attention
Dec 29, 2020
Daniel Korzekwa, Roberto Barra-Chicote, Szymon Zaporowski, Grzegorz Beringer, Jaime Lorenzo-Trueba, Alicja Serafinowicz, Jasha Droppo, Thomas Drugman, Bozena Kostek

Interpretable Deep Learning Model for the Detection and Reconstruction of Dysarthric Speech
Jul 10, 2019
Daniel Korzekwa, Roberto Barra-Chicote, Bozena Kostek, Thomas Drugman, Mateusz Lajszczak
