Manuel Sam Ribeiro

Multilingual context-based pronunciation learning for Text-to-Speech

Jul 31, 2023
Giulia Comini, Manuel Sam Ribeiro, Fan Yang, Heereen Shim, Jaime Lorenzo-Trueba

Comparing normalizing flows and diffusion models for prosody and acoustic modelling in text-to-speech

Jul 31, 2023
Guangyan Zhang, Thomas Merritt, Manuel Sam Ribeiro, Biel Tura-Vecino, Kayoko Yanagisawa, Kamil Pokora, Abdelhamid Ezzerg, Sebastian Cygert, Ammar Abbas, Piotr Bilinski, Roberto Barra-Chicote, Daniel Korzekwa, Jaime Lorenzo-Trueba

Improving grapheme-to-phoneme conversion by learning pronunciations from speech recordings

Jul 31, 2023
Manuel Sam Ribeiro, Giulia Comini, Jaime Lorenzo-Trueba

Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks

Sep 22, 2022
Cassia Valentini-Botinhao, Manuel Sam Ribeiro, Oliver Watts, Korin Richmond, Gustav Eje Henter

Low-data? No problem: low-resource, language-agnostic conversational text-to-speech via F0-conditioned data augmentation

Jul 29, 2022
Giulia Comini, Goeric Huybrechts, Manuel Sam Ribeiro, Adam Gabrys, Jaime Lorenzo-Trueba

Voice Filter: Few-shot text-to-speech speaker adaptation using voice conversion as a post-processing module

Feb 16, 2022
Adam Gabryś, Goeric Huybrechts, Manuel Sam Ribeiro, Chung-Ming Chien, Julian Roth, Giulia Comini, Roberto Barra-Chicote, Bartek Perz, Jaime Lorenzo-Trueba

Cross-speaker style transfer for text-to-speech using data augmentation

Feb 10, 2022
Manuel Sam Ribeiro, Julian Roth, Giulia Comini, Goeric Huybrechts, Adam Gabrys, Jaime Lorenzo-Trueba

Automatic audiovisual synchronisation for ultrasound tongue imaging

May 31, 2021
Aciel Eshky, Joanne Cleland, Manuel Sam Ribeiro, Eleanor Sugden, Korin Richmond, Steve Renals

Silent versus modal multi-speaker speech recognition from ultrasound and video

Feb 27, 2021
Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals

Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors

Feb 27, 2021
Manuel Sam Ribeiro, Joanne Cleland, Aciel Eshky, Korin Richmond, Steve Renals
