Abdelhamid Ezzerg

AE-Flow: AutoEncoder Normalizing Flow

Dec 27, 2023
Jakub Mosiński, Piotr Biliński, Thomas Merritt, Abdelhamid Ezzerg, Daniel Korzekwa

Creating New Voices using Normalizing Flows

Dec 22, 2023
Piotr Bilinski, Thomas Merritt, Abdelhamid Ezzerg, Kamil Pokora, Sebastian Cygert, Kayoko Yanagisawa, Roberto Barra-Chicote, Daniel Korzekwa

Comparing normalizing flows and diffusion models for prosody and acoustic modelling in text-to-speech

Jul 31, 2023
Guangyan Zhang, Thomas Merritt, Manuel Sam Ribeiro, Biel Tura-Vecino, Kayoko Yanagisawa, Kamil Pokora, Abdelhamid Ezzerg, Sebastian Cygert, Ammar Abbas, Piotr Bilinski, Roberto Barra-Chicote, Daniel Korzekwa, Jaime Lorenzo-Trueba

Remap, warp and attend: Non-parallel many-to-many accent conversion with Normalizing Flows

Nov 10, 2022
Abdelhamid Ezzerg, Thomas Merritt, Kayoko Yanagisawa, Piotr Bilinski, Magdalena Proszewska, Kamil Pokora, Renard Korzeniowski, Roberto Barra-Chicote, Daniel Korzekwa

GlowVC: Mel-spectrogram space disentangling model for language-independent text-free voice conversion

Jul 04, 2022
Magdalena Proszewska, Grzegorz Beringer, Daniel Sáez-Trigueros, Thomas Merritt, Abdelhamid Ezzerg, Roberto Barra-Chicote

Text-free non-parallel many-to-many voice conversion using normalising flows

Mar 15, 2022
Thomas Merritt, Abdelhamid Ezzerg, Piotr Biliński, Magdalena Proszewska, Kamil Pokora, Roberto Barra-Chicote, Daniel Korzekwa

Enhancing audio quality for expressive Neural Text-to-Speech

Aug 13, 2021
Abdelhamid Ezzerg, Adam Gabrys, Bartosz Putrycz, Daniel Korzekwa, Daniel Saez-Trigueros, David McHardy, Kamil Pokora, Jakub Lachowicz, Jaime Lorenzo-Trueba, Viacheslav Klimkov

Non-Autoregressive TTS with Explicit Duration Modelling for Low-Resource Highly Expressive Speech

Jun 25, 2021
Raahil Shah, Kamil Pokora, Abdelhamid Ezzerg, Viacheslav Klimkov, Goeric Huybrechts, Bartosz Putrycz, Daniel Korzekwa, Thomas Merritt
