
June Sig Sung

Controllable speech synthesis by learning discrete phoneme-level prosodic representations

Nov 29, 2022
Nikolaos Ellinas, Myrsini Christidou, Alexandra Vioni, June Sig Sung, Aimilios Chalamandaris, Pirros Tsiakoulis, Paris Mastorocostas


Predicting phoneme-level prosody latents using AR and flow-based Prior Networks for expressive speech synthesis

Nov 02, 2022
Konstantinos Klapsas, Karolos Nikitaras, Nikolaos Ellinas, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis


Learning utterance-level representations through token-level acoustic latents prediction for Expressive Speech Synthesis

Nov 01, 2022
Karolos Nikitaras, Konstantinos Klapsas, Nikolaos Ellinas, Georgia Maniati, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis


Generating Gender-Ambiguous Text-to-Speech Voices

Nov 01, 2022
Konstantinos Markopoulos, Georgia Maniati, Georgios Vamvoukakis, Nikolaos Ellinas, Karolos Nikitaras, Konstantinos Klapsas, Georgios Vardaxoglou, Panos Kakoulidis, June Sig Sung, Inchul Hwang, Aimilios Chalamandaris, Pirros Tsiakoulis, Spyros Raptis


Investigating Content-Aware Neural Text-To-Speech MOS Prediction Using Prosodic and Linguistic Features

Nov 01, 2022
Alexandra Vioni, Georgia Maniati, Nikolaos Ellinas, June Sig Sung, Inchul Hwang, Aimilios Chalamandaris, Pirros Tsiakoulis


Cross-lingual Text-To-Speech with Flow-based Voice Conversion for Improved Pronunciation

Oct 31, 2022
Nikolaos Ellinas, Georgios Vamvoukakis, Konstantinos Markopoulos, Georgia Maniati, Panos Kakoulidis, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis


Fine-grained Noise Control for Multispeaker Speech Synthesis

Apr 11, 2022
Karolos Nikitaras, Georgios Vamvoukakis, Nikolaos Ellinas, Konstantinos Klapsas, Konstantinos Markopoulos, Spyros Raptis, June Sig Sung, Gunu Jho, Aimilios Chalamandaris, Pirros Tsiakoulis


Karaoker: Alignment-free singing voice synthesis with speech training data

Apr 08, 2022
Panos Kakoulidis, Nikolaos Ellinas, Georgios Vamvoukakis, Konstantinos Markopoulos, June Sig Sung, Gunu Jho, Pirros Tsiakoulis, Aimilios Chalamandaris


Self supervised learning for robust voice cloning

Apr 07, 2022
Konstantinos Klapsas, Nikolaos Ellinas, Karolos Nikitaras, Georgios Vamvoukakis, Panos Kakoulidis, Konstantinos Markopoulos, Spyros Raptis, June Sig Sung, Gunu Jho, Aimilios Chalamandaris, Pirros Tsiakoulis


SOMOS: The Samsung Open MOS Dataset for the Evaluation of Neural Text-to-Speech Synthesis

Apr 06, 2022
Georgia Maniati, Alexandra Vioni, Nikolaos Ellinas, Karolos Nikitaras, Konstantinos Klapsas, June Sig Sung, Gunu Jho, Aimilios Chalamandaris, Pirros Tsiakoulis
