Na Hu


Learning Noise-independent Speech Representation for High-quality Voice Conversion for Noisy Target Speakers

Jul 02, 2022
Liumeng Xue, Shan Yang, Na Hu, Dan Su, Lei Xie

Controllable Context-aware Conversational Speech Synthesis

Jun 21, 2021
Jian Cong, Shan Yang, Na Hu, Guangzhi Li, Lei Xie, Dan Su

VARA-TTS: Non-Autoregressive Text-to-Speech Synthesis based on Very Deep VAE with Residual Attention

Feb 12, 2021
Peng Liu, Yuewen Cao, Songxiang Liu, Na Hu, Guangzhi Li, Chao Weng, Dan Su

Phonetic Posteriorgrams based Many-to-Many Singing Voice Conversion via Adversarial Training

Dec 03, 2020
Haohan Guo, Heng Lu, Na Hu, Chunlei Zhang, Shan Yang, Lei Xie, Dan Su, Dong Yu

DurIAN: Duration Informed Attention Network For Multimodal Synthesis

Sep 05, 2019
Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu
