Yuanzhe Chen

LM-VC: Zero-shot Voice Conversion via Speech Generation based on Language Models

Jun 18, 2023
Zhichao Wang, Yuanzhe Chen, Lei Xie, Qiao Tian, Yuping Wang

Language model (LM) based audio generation frameworks, e.g., AudioLM, have recently achieved new state-of-the-art performance in zero-shot audio generation. In this paper, we explore the feasibility of LMs for zero-shot voice conversion. An intuitive approach is to follow AudioLM: tokenize speech into semantic and acoustic tokens with HuBERT and SoundStream, respectively, and convert source semantic tokens to target acoustic tokens conditioned on acoustic tokens of the target speaker. However, such an approach encounters several issues: 1) the linguistic content contained in semantic tokens may get dispersed during multi-layer modeling, while the lengthy speech input in the voice conversion task makes contextual learning even harder; 2) the semantic tokens still contain speaker-related information, which may leak into the target speech and lower the target speaker similarity; 3) the generation diversity introduced by sampling from the LM can produce unexpected outcomes during inference, causing unnatural pronunciation and degraded speech quality. To mitigate these problems, we propose LM-VC, a two-stage language modeling approach that first generates coarse acoustic tokens to recover the source linguistic content and the target speaker's timbre, and then reconstructs the fine acoustic tokens to restore acoustic details in the converted speech. Specifically, to enhance content preservation and facilitate better disentanglement, a masked prefix LM with a mask prediction strategy is used for coarse acoustic modeling. This model is encouraged to recover the masked content from the surrounding context and to generate the target speech based on the target speaker's utterance and the corrupted semantic tokens. In addition, to further alleviate sampling errors during generation, an external LM, which employs window attention to capture local acoustic relations, participates in the coarse acoustic modeling.
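
As a rough illustration of the window-attention component mentioned above, the sketch below builds a causal local-window attention mask of the kind such an external LM could use to restrict each acoustic token to its recent neighbors; the sequence length and window size are hypothetical, not taken from the paper.

```python
# Minimal sketch (assumed shapes, not the authors' code): a causal attention mask
# restricted to a local window, so each position only attends to its most recent
# `window` tokens -- the kind of locality a window-attention LM exploits.
import torch

def causal_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """True marks allowed attention: position j is visible from position i
    iff 0 <= i - j < window (causal and local)."""
    idx = torch.arange(seq_len)
    diff = idx.unsqueeze(1) - idx.unsqueeze(0)   # diff[i, j] = i - j
    return (diff >= 0) & (diff < window)

if __name__ == "__main__":
    print(causal_window_mask(seq_len=8, window=3).int())
```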

Multi-level Temporal-channel Speaker Retrieval for Robust Zero-shot Voice Conversion

May 12, 2023
Zhichao Wang, Liumeng Xue, Qiuqiang Kong, Lei Xie, Yuanzhe Chen, Qiao Tian, Yuping Wang

Zero-shot voice conversion (VC) converts source speech into the voice of any desired speaker using only one utterance of that speaker, without requiring additional model updates. Typical methods achieve zero-shot VC with a speaker representation from a pre-trained speaker verification (SV) model or by learning a speaker representation during VC training. However, existing speaker modeling methods overlook how the richness of speaker information varies across the temporal and frequency-channel dimensions of speech. This insufficient speaker modeling hampers the ability of the VC model to accurately represent unseen speakers that are not in the training dataset. In this study, we present a robust zero-shot VC model with multi-level temporal-channel retrieval, referred to as MTCR-VC. Specifically, to flexibly adapt to speaker characteristics that vary along the temporal and channel axes of speech, we propose a novel fine-grained speaker modeling method, called temporal-channel retrieval (TCR), to find out when and where speaker information appears in speech. It retrieves a variable-length speaker representation from both the temporal and channel dimensions under the guidance of a pre-trained SV model. In addition, inspired by the hierarchical process of human speech production, the MTCR speaker module stacks several TCR blocks to extract speaker representations at multiple granularity levels. Furthermore, to achieve better speech disentanglement and reconstruction, we introduce a cycle-based training strategy that recurrently simulates zero-shot inference. We adopt perceptual constraints on three aspects, namely content, style, and speaker, to drive this process. Experiments demonstrate that MTCR-VC is superior to previous zero-shot VC methods in modeling speaker timbre while maintaining good speech naturalness.
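
To make the temporal-channel retrieval idea more concrete, here is a minimal sketch of an attention block that pools speaker cues separately along the time axis and the channel axis and concatenates the two views; the layout, dimensions, single attention head, and fixed frame count are simplifying assumptions, not the released MTCR-VC design.

```python
# Sketch only: learned queries retrieve speaker information along the temporal
# axis and (after transposing) along the channel axis of a feature map.
import torch
import torch.nn as nn

class TemporalChannelRetrieval(nn.Module):
    def __init__(self, channels: int, frames: int, num_queries: int = 4):
        super().__init__()
        self.temporal_q = nn.Parameter(torch.randn(num_queries, channels))
        self.channel_q = nn.Parameter(torch.randn(num_queries, frames))
        self.temporal_attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)
        self.channel_attn = nn.MultiheadAttention(frames, num_heads=1, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, channels)
        b = feats.size(0)
        tq = self.temporal_q.unsqueeze(0).expand(b, -1, -1)       # queries over time
        t_out, _ = self.temporal_attn(tq, feats, feats)           # (b, q, channels)
        cq = self.channel_q.unsqueeze(0).expand(b, -1, -1)        # queries over channels
        feats_c = feats.transpose(1, 2)                           # (b, channels, frames)
        c_out, _ = self.channel_attn(cq, feats_c, feats_c)        # (b, q, frames)
        return torch.cat([t_out.mean(1), c_out.mean(1)], dim=-1)  # pooled speaker vector

if __name__ == "__main__":
    tcr = TemporalChannelRetrieval(channels=64, frames=100)
    print(tcr(torch.randn(2, 100, 64)).shape)  # torch.Size([2, 164])
```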

* Submitted to TASLP 

Non-parallel Accent Conversion using Pseudo Siamese Disentanglement Network

Dec 12, 2022
Dongya Jia, Qiao Tian, Jiaxin Li, Yuanzhe Chen, Kainan Peng, Mingbo Ma, Yuping Wang, Yuxuan Wang

The main goal of accent conversion (AC) is to convert the accent of speech into a target accent while preserving the content and timbre. Previous reference-based methods rely on reference utterances in the inference phase, which limits their practical application. Moreover, previous reference-free methods mostly require parallel data in the training phase. In this paper, we propose a reference-free method based on non-parallel data from the perspective of feature disentanglement. The proposed Pseudo Siamese Disentanglement Network (PSDN) disentangles accent information from the content representation and models the target accent. In addition, a timbre augmentation method is proposed to better retain the timbre of speakers for whom no target-accent data is available. Experimental results show that the proposed system can convert native American English speech into Indian-accented speech with higher accentedness (3.47) than the baseline (2.75) and the input (1.19). The naturalness of the converted speech is also comparable to that of the input.
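
The abstract does not spell out the disentanglement mechanism, but one generic way to strip an attribute such as accent from a content representation is adversarial training with a gradient-reversal layer; the sketch below illustrates only that general idea and should not be read as the PSDN architecture itself.

```python
# Generic disentanglement sketch (not PSDN): an accent classifier on top of a
# gradient-reversal layer pushes the content encoder to discard accent cues.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # reverse gradients into the encoder

class AccentAdversary(nn.Module):
    def __init__(self, dim: int, n_accents: int, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.clf = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_accents))

    def forward(self, content: torch.Tensor) -> torch.Tensor:
        # cross-entropy on these logits trains the classifier normally, while the
        # reversed gradients encourage accent-agnostic content features
        return self.clf(GradReverse.apply(content, self.lam))
```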

Delivering Speaking Style in Low-resource Voice Conversion with Multi-factor Constraints

Nov 16, 2022
Zhichao Wang, Xinsheng Wang, Lei Xie, Yuanzhe Chen, Qiao Tian, Yuping Wang

Conveying the linguistic content and maintaining the source speech's speaking style, such as intonation and emotion, are both essential in voice conversion (VC). However, in a low-resource situation, where only limited utterances from the target speaker are accessible, existing VC methods can hardly meet this requirement while also capturing the target speaker's timbre. In this work, a novel VC model, referred to as MFC-StyleVC, is proposed for the low-resource VC task. Specifically, a speaker timbre constraint generated by a clustering method is proposed to guide the learning of the target speaker's timbre at different stages. Meanwhile, to prevent over-fitting to the target speaker's limited data, perceptual regularization constraints explicitly maintain model performance on specific aspects, including speaking style, linguistic content, and speech quality. In addition, a simulation mode is introduced to simulate the inference process and alleviate the mismatch between training and inference. Extensive experiments on highly expressive speech demonstrate the superiority of the proposed method in low-resource VC.
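
As an illustration of what a clustering-based timbre constraint could look like, the sketch below clusters the target speaker's few utterance-level embeddings into centroids and penalizes the distance from the generated speech's speaker embedding to the nearest one; the use of k-means and a cosine criterion are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical timbre constraint: pull generated speech toward centroids of the
# target speaker's embedding clusters (k-means and cosine distance are assumed).
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def timbre_centroids(target_embs: torch.Tensor, n_clusters: int = 2) -> torch.Tensor:
    """Cluster utterance-level speaker embeddings (CPU tensor) into centroids."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(target_embs.numpy())
    return torch.tensor(km.cluster_centers_, dtype=target_embs.dtype)

def timbre_constraint_loss(gen_emb: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Mean cosine distance from each generated embedding to its nearest centroid."""
    sims = F.cosine_similarity(gen_emb.unsqueeze(1), centroids.unsqueeze(0), dim=-1)
    return (1.0 - sims.max(dim=1).values).mean()
```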

* Submitted to ICASSP 2023 

Streaming Voice Conversion Via Intermediate Bottleneck Features And Non-streaming Teacher Guidance

Oct 27, 2022
Yuanzhe Chen, Ming Tu, Tang Li, Xin Li, Qiuqiang Kong, Jiaxin Li, Zhichao Wang, Qiao Tian, Yuping Wang, Yuxuan Wang

Streaming voice conversion (VC) is the task of converting the voice of one person to another in real time. Previous streaming VC methods use phonetic posteriorgrams (PPGs) extracted from automatic speech recognition (ASR) systems to represent speaker-independent information. However, PPGs lack the prosody and vocalization information of the source speaker, and streaming PPGs leak undesired timbre of the source speaker. In this paper, we propose to replace PPGs with intermediate bottleneck features (IBFs). VC systems trained with IBFs retain more prosody and vocalization information of the source speaker. Furthermore, we propose a non-streaming teacher guidance (TG) framework that addresses the timbre leakage problem. Experiments show that our proposed IBFs and the TG framework achieve a state-of-the-art streaming VC naturalness of 3.85, a content consistency of 3.77, and a timbre similarity of 3.77 under a future receptive field of 160 ms, significantly outperforming previous streaming VC systems.
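
The teacher-guidance idea can be reduced to distilling a causal streaming student toward a frozen non-streaming teacher; the sketch below shows that training signal in its simplest form, with the module interfaces and the L1 criterion assumed for illustration rather than taken from the paper.

```python
# Distillation-style sketch of non-streaming teacher guidance (interfaces assumed):
# the teacher sees full context, the student sees only a limited future window.
import torch
import torch.nn as nn

def teacher_guidance_loss(student: nn.Module,
                          teacher: nn.Module,
                          source_feats: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():                 # frozen non-streaming teacher
        target = teacher(source_feats)
    pred = student(source_feats)          # causal streaming student
    return nn.functional.l1_loss(pred, target)
```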

* The paper has been submitted to ICASSP 2023 

Cloning one's voice using very limited data in the wild

Oct 08, 2021
Dongyang Dai, Yuanzhe Chen, Li Chen, Ming Tu, Lu Liu, Rui Xia, Qiao Tian, Yuping Wang, Yuxuan Wang

With the increasing popularity of speech synthesis products, the industry has put forward more requirements for personalized speech synthesis: (1) how to clone a person's voice from low-resource, easily accessible data, and (2) how to clone a person's voice while controlling style and prosody. To address these two problems, we propose the Hieratron framework, in which prosody and timbre are modeled separately by two modules, so that timbre and the other characteristics of the audio can be controlled independently during speech generation. Experiments show that, for very limited target-speaker data collected in the wild, Hieratron has clear advantages over the traditional method: in addition to enabling control over the style and language of the generated speech, it improves the mean opinion score on speech quality by more than 0.2 points.
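
A minimal sketch of the two-module factorization, with module interfaces and dimensions invented for illustration: a prosody module conditioned on a style embedding and a timbre module conditioned on a speaker embedding, composed so that either factor can be swapped independently.

```python
# Toy factorization (not the Hieratron implementation): prosody and timbre are
# produced by separate modules, so style and speaker identity decouple.
import torch
import torch.nn as nn

class ProsodyModule(nn.Module):
    def __init__(self, text_dim=256, style_dim=32, prosody_dim=80):
        super().__init__()
        self.net = nn.Linear(text_dim + style_dim, prosody_dim)

    def forward(self, text_emb, style_emb):
        return self.net(torch.cat([text_emb, style_emb], dim=-1))

class TimbreModule(nn.Module):
    def __init__(self, prosody_dim=80, spk_dim=64, acoustic_dim=80):
        super().__init__()
        self.net = nn.Linear(prosody_dim + spk_dim, acoustic_dim)

    def forward(self, prosody, speaker_emb):
        return self.net(torch.cat([prosody, speaker_emb], dim=-1))

def synthesize(prosody_net, timbre_net, text_emb, style_emb, speaker_emb):
    prosody = prosody_net(text_emb, style_emb)   # style is controlled here
    return timbre_net(prosody, speaker_emb)      # timbre is controlled here
```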

Understanding Hidden Memories of Recurrent Neural Networks

Oct 30, 2017
Yao Ming, Shaozu Cao, Ruixiang Zhang, Zhen Li, Yuanzhe Chen, Yangqiu Song, Huamin Qu

Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and have achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements to their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize the co-clustering results as memory chips and word clouds to provide more structured knowledge of RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts.
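
A simplified reading of the core analysis, using synthetic data in place of real hidden states: estimate each unit's expected response per word, then co-cluster the resulting word-by-unit matrix so related words and units fall into joint blocks (the basis of the memory-chip and word-cloud views). The data shapes and the spectral co-clustering choice are assumptions for illustration.

```python
# Sketch with synthetic data: per-word expected responses of hidden units,
# then co-clustering of the word-by-unit matrix (nonnegative responses assumed
# here so that spectral co-clustering is well behaved).
import numpy as np
from sklearn.cluster import SpectralCoclustering

def expected_response(hidden_updates: np.ndarray, word_ids: np.ndarray, vocab_size: int) -> np.ndarray:
    """hidden_updates: (num_tokens, num_units); word_ids: (num_tokens,).
    Returns a (vocab_size, num_units) matrix of each unit's mean response per word."""
    resp = np.zeros((vocab_size, hidden_updates.shape[1]))
    counts = np.zeros(vocab_size)
    for h, w in zip(hidden_updates, word_ids):
        resp[w] += h
        counts[w] += 1
    return resp / np.maximum(counts[:, None], 1)

rng = np.random.default_rng(0)
resp = expected_response(rng.random((1000, 64)), rng.integers(0, 50, 1000), vocab_size=50)
model = SpectralCoclustering(n_clusters=5, random_state=0).fit(resp)
print(model.row_labels_[:10])     # word cluster assignments
print(model.column_labels_[:10])  # hidden-unit cluster assignments
```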

* Published at IEEE Conference on Visual Analytics Science and Technology (IEEE VAST 2017) 

Intra-and-Inter-Constraint-based Video Enhancement based on Piecewise Tone Mapping

Feb 21, 2015
Yuanzhe Chen, Weiyao Lin, Chongyang Zhang, Zhenzhong Chen, Ning Xu, Jun Xie

Video enhancement plays an important role in various video applications. In this paper, we propose a new intra-and-inter-constraint-based video enhancement approach that aims to 1) achieve high intra-frame quality across the entire picture, where multiple regions of interest (ROIs) can be adaptively and simultaneously enhanced, and 2) guarantee inter-frame quality consistency among video frames. We first analyze features from different ROIs and create a piecewise tone mapping curve for the entire frame so that the intra-frame quality of a frame can be enhanced. We further introduce new inter-frame constraints to improve temporal quality consistency. Experimental results show that the proposed algorithm clearly outperforms state-of-the-art algorithms.
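
The basic operation underlying the intra-frame enhancement is remapping pixel intensities through a piecewise curve; the toy sketch below applies a piecewise-linear tone map with made-up control points (in the paper the curve is derived from ROI features, and additional inter-frame constraints keep the curves temporally consistent).

```python
# Toy piecewise-linear tone mapping (control points invented for illustration).
import numpy as np

def piecewise_tone_map(frame: np.ndarray, in_knots, out_knots) -> np.ndarray:
    """Remap intensities in [0, 255] through a piecewise-linear curve."""
    return np.interp(frame.astype(np.float32), in_knots, out_knots).astype(np.uint8)

frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
# example curve: lift shadows and mid-tones, compress highlights
enhanced = piecewise_tone_map(frame, in_knots=[0, 64, 128, 255], out_knots=[0, 80, 180, 255])
print(frame.mean(), enhanced.mean())
```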

* IEEE Trans. Circuits and Systems for Video Technology, vol. 23, no. 1, pp. 74-82, 2013  
* This manuscript is the accepted version for TCSVT (IEEE Transactions on Circuits and Systems for Video Technology) 

A new network-based algorithm for human activity recognition in video

Feb 21, 2015
Weiyao Lin, Yuanzhe Chen, Jianxin Wu, Hanli Wang, Bin Sheng, Hongxiang Li

In this paper, a new network-transmission-based (NTB) algorithm is proposed for human activity recognition in videos. The proposed NTB algorithm models the entire scene as an error-free network. In this network, each node corresponds to a patch of the scene and each edge represents the activity correlation between the corresponding patches. Based on this network, we further model people in the scene as packages, while human activities are modeled as the process of package transmission in the network. By analyzing these specific "package transmission" processes, various activities can be effectively detected. The application of our NTB algorithm to abnormal activity detection and group activity recognition is described in detail in the paper. Experimental results demonstrate the effectiveness of our proposed algorithm.
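
Very loosely, the scene network at the heart of NTB can be pictured as in the sketch below: nodes are patches, edge weights encode activity correlation, and a person's movement is read as a path ("package transmission") through the graph. The weighting scheme and the shortest-path query are invented for illustration only.

```python
# Loose illustration (not the NTB algorithm itself): build a patch graph from an
# activity-correlation matrix and query a candidate transmission route.
import networkx as nx
import numpy as np

def build_scene_graph(corr: np.ndarray) -> nx.Graph:
    """corr: (P, P) symmetric activity-correlation matrix between patches."""
    g = nx.Graph()
    p = corr.shape[0]
    g.add_nodes_from(range(p))
    for i in range(p):
        for j in range(i + 1, p):
            if corr[i, j] > 0:
                g.add_edge(i, j, weight=1.0 / corr[i, j])  # higher correlation -> cheaper hop
    return g

corr = np.random.rand(6, 6)
corr = (corr + corr.T) / 2
g = build_scene_graph(corr)
print(nx.shortest_path(g, source=0, target=5, weight="weight"))
```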

* IEEE Trans. Circuits and Systems for Video Technology, vol. 24, no. 5, pp. 826-841, 2014  
* This manuscript is the accepted version for TCSVT (IEEE Transactions on Circuits and Systems for Video Technology) 