Qicong Xie

MSM-VC: High-fidelity Source Style Transfer for Non-Parallel Voice Conversion by Multi-scale Style Modeling

Sep 03, 2023
Zhichao Wang, Xinsheng Wang, Qicong Xie, Tao Li, Lei Xie, Qiao Tian, Yuping Wang

UniSyn: An End-to-End Unified Model for Text-to-Speech and Singing Voice Synthesis

Dec 06, 2022
Yi Lei, Shan Yang, Xinsheng Wang, Qicong Xie, Jixun Yao, Lei Xie, Dan Su

Expressive-VC: Highly Expressive Voice Conversion with Attention Fusion of Bottleneck and Perturbation Features

Nov 09, 2022
Ziqian Ning, Qicong Xie, Pengcheng Zhu, Zhichao Wang, Liumeng Xue, Jixun Yao, Lei Xie, Mengxiao Bi

Cross-speaker Emotion Transfer Based On Prosody Compensation for End-to-End Speech Synthesis

Jul 04, 2022
Tao Li, Xinsheng Wang, Qicong Xie, Zhichao Wang, Mingqi Jiang, Lei Xie

End-to-End Voice Conversion with Information Perturbation

Jun 15, 2022
Qicong Xie, Shan Yang, Yi Lei, Lei Xie, Dan Su

Multi-speaker Multi-style Text-to-speech Synthesis With Single-speaker Single-style Training Data Scenarios

Dec 23, 2021
Qicong Xie, Tao Li, Xinsheng Wang, Zhichao Wang, Lei Xie, Guoqiao Yu, Guanglu Wan

One-shot Voice Conversion For Style Transfer Based On Speaker Adaptation

Nov 24, 2021
Zhichao Wang, Qicong Xie, Tao Li, Hongqiang Du, Lei Xie, Pengcheng Zhu, Mengxiao Bi

Controllable cross-speaker emotion transfer for end-to-end speech synthesis

Sep 14, 2021
Tao Li, Xinsheng Wang, Qicong Xie, Zhichao Wang, Lei Xie

AnyoneNet: Synchronized Speech and Talking Head Generation for Arbitrary Person

Aug 11, 2021
Xinsheng Wang, Qicong Xie, Jihua Zhu, Lei Xie, Scharenborg

The Multi-speaker Multi-style Voice Cloning Challenge 2021

Apr 05, 2021
Qicong Xie, Xiaohai Tian, Guanghou Liu, Kun Song, Lei Xie, Zhiyong Wu, Hai Li, Song Shi, Haizhou Li, Fen Hong, Hui Bu, Xin Xu
