Kyungguen Byun

Stylebook: Content-Dependent Speaking Style Modeling for Any-to-Any Voice Conversion using Only Speech Data

Sep 12, 2023
Hyungseob Lim, Kyungguen Byun, Sunkuk Moon, Erik Visser

Highly Controllable Diffusion-based Any-to-Any Voice Conversion Model with Frame-level Prosody Feature

Sep 06, 2023
Kyungguen Byun, Sunkuk Moon, Erik Visser

Facetron: Multi-speaker Face-to-Speech Model based on Cross-modal Latent Representations

Jul 26, 2021
Se-Yun Um, Jihyun Kim, Jihyun Lee, Sangshin Oh, Kyungguen Byun, Hong-Goo Kang

ExcitNet vocoder: A neural excitation model for parametric speech synthesis systems

Nov 09, 2018
Eunwoo Song, Kyungguen Byun, Hong-Goo Kang

Speaker-adaptive neural vocoders for statistical parametric speech synthesis systems

Nov 08, 2018
Eunwoo Song, Jinseob Kim, Kyungguen Byun, Hong-Goo Kang
