Jingjing Yin

PromptVC: Flexible Stylistic Voice Conversion in Latent Space Driven by Natural Language Prompts

Sep 17, 2023
Jixun Yao, Yuguang Yang, Yi Lei, Ziqian Ning, Yanni Hu, Yu Pan, Jingjing Yin, Hongbin Zhou, Heng Lu, Lei Xie

Style voice conversion aims to transform the style of source speech into a desired style according to real-world application demands. However, current style voice conversion approaches rely on pre-defined labels or reference speech to control the conversion process, which limits style diversity or falls short in the intuitiveness and interpretability of the style representation. In this study, we propose PromptVC, a novel style voice conversion approach that employs a latent diffusion model to generate a style vector driven by natural language prompts. Specifically, the style vector is extracted by a style encoder during training, and the latent diffusion model is then trained independently to sample the style vector from noise, conditioned on natural language prompts. To improve style expressiveness, we leverage HuBERT to extract discrete tokens and replace each token with its K-Means center embedding to serve as the linguistic content, which minimizes residual style information. Additionally, we deduplicate consecutive identical discrete tokens and employ a differentiable duration predictor to re-predict the duration of each token, which adapts the duration of the same linguistic content to different styles. Subjective and objective evaluation results demonstrate the effectiveness of the proposed system.
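The content pipeline the abstract describes — deduplicating consecutive repeated HuBERT tokens while recording their durations, then swapping each token for its K-Means center embedding — can be sketched as below. This is a minimal illustration, not the paper's implementation: the token IDs, codebook size, and embedding dimension are made-up toy values.

```python
import numpy as np
from itertools import groupby

# Toy K-Means codebook: 3 centers of dimension 3 (illustrative only;
# real HuBERT codebooks have hundreds of centers of higher dimension).
kmeans_centers = np.array([
    [0.1, 0.2, 0.3],   # center embedding for token 0
    [0.4, 0.5, 0.6],   # center embedding for token 1
    [0.7, 0.8, 0.9],   # center embedding for token 2
])

tokens = [0, 0, 1, 1, 1, 2, 0]   # toy frame-level discrete token sequence

# 1) Deduplicate consecutive repeats and record each run's length;
#    a duration predictor would later re-predict these per target style.
runs = [(tok, len(list(run))) for tok, run in groupby(tokens)]
dedup_tokens = [tok for tok, _ in runs]
durations = [dur for _, dur in runs]

# 2) Replace each discrete token with its K-Means center embedding to
#    form the linguistic-content sequence with minimal residual style.
content = kmeans_centers[dedup_tokens]   # shape: (num_runs, embed_dim)
```

With the toy sequence above, the run-length deduplication yields tokens `[0, 1, 2, 0]` with durations `[2, 3, 1, 1]`.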

* Submitted to ICASSP 2024 

HYBRIDFORMER: improving SqueezeFormer with hybrid attention and NSR mechanism

Mar 15, 2023
Yuguang Yang, Yu Pan, Jingjing Yin, Jiangyu Han, Lei Ma, Heng Lu

SqueezeFormer has recently shown impressive performance in automatic speech recognition (ASR). However, its inference speed suffers from the quadratic complexity of softmax attention (SA). In addition, limited by its large convolution kernel size, SqueezeFormer's local modeling ability is insufficient. In this paper, we propose HybridFormer, a novel method that improves SqueezeFormer in a fast and efficient way. Specifically, we first incorporate linear attention (LA) and propose a hybrid LASA paradigm to increase the model's inference speed. Second, a hybrid neural architecture search (NAS) guided structural re-parameterization (SRep) mechanism, termed NSR, is proposed to enhance the model's ability to extract local interactions. Extensive experiments on the LibriSpeech dataset demonstrate that HybridFormer achieves a 9.1% relative word error rate (WER) reduction over SqueezeFormer on the test-other set. Furthermore, when the input speech is 30 s long, HybridFormer improves inference speed by up to 18%. Our source code is available online.
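The speedup behind linear attention comes from reordering the matrix products so that no T×T score matrix is ever formed. A minimal NumPy sketch of both attention forms follows; the ELU+1 feature map and the toy sizes are illustrative assumptions, not the paper's actual LASA configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                           # toy sequence length and head dim
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))

def softmax_attention(Q, K, V):
    # Standard SA: an explicit T x T score matrix -> O(T^2) time/memory.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def linear_attention(Q, K, V):
    # Kernelized LA with an ELU+1 feature map (a common choice):
    # associativity lets us compute phi(K)^T V first, so the cost is
    # O(T d^2) instead of O(T^2 d) -- linear in sequence length.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qf, Kf = phi(Q), phi(K)
    num = Qf @ (Kf.T @ V)             # (T, d) via a d x d intermediate
    den = Qf @ Kf.sum(axis=0)         # per-query normalizer, always > 0
    return num / den[:, None]
```

Both functions map (T, d) inputs to (T, d) outputs; only the order of operations, and hence the asymptotic cost, differs.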

* Accepted by ICASSP 2023 

LMEC: Learnable Multiplicative Absolute Position Embedding Based Conformer for Speech Recognition

Dec 05, 2022
Yuguang Yang, Yu Pan, Jingjing Yin, Heng Lu

This paper proposes a Learnable Multiplicative absolute position Embedding based Conformer (LMEC). It contains a kernelized linear attention (LA) module, called LMLA, that addresses the high computational cost of long-sequence speech recognition, as well as an alternative to the FFN structure. First, the ELU function is adopted as the kernel function of the proposed LA module. Second, we propose a novel Learnable Multiplicative Absolute Position Embedding (LM-APE) based re-weighting mechanism that reduces the well-known quadratic time and space complexity of softmax self-attention. Third, we substitute Gated Linear Units (GLU) for the Feed-Forward Network (FFN) for better performance. Extensive experiments were conducted on the public LibriSpeech dataset. Compared to a Conformer with cosFormer-style linear attention, the proposed method achieves up to a 0.63% word-error-rate improvement on test-other and improves inference speed by up to 13% (left product) and 33% (right product) on the LA module.
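The GLU-for-FFN substitution mentioned above can be sketched in a few lines: one linear branch gates the other elementwise before the output projection. This is a generic GLU sketch under assumed toy dimensions, not LMEC's actual layer sizes or weights:

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_ff = 4, 8                  # toy dimensions, not the paper's
x = rng.standard_normal((3, d_model)) # 3 toy frames

W1 = rng.standard_normal((d_model, d_ff))   # value branch
W2 = rng.standard_normal((d_model, d_ff))   # gate branch
W3 = rng.standard_normal((d_ff, d_model))   # output projection

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ffn(x):
    # Conventional Transformer FFN: two linear layers with ReLU between.
    return np.maximum(x @ W1, 0.0) @ W3

def glu_ffn(x):
    # GLU variant: a sigmoid gate modulates the value branch
    # elementwise before the output projection.
    return ((x @ W1) * sigmoid(x @ W2)) @ W3
```

Both variants keep the (frames, d_model) input and output shapes, so the GLU block is a drop-in replacement for the FFN.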

* NCMMSC2022 

ESKNet: An enhanced adaptive selection kernel convolution for breast tumors segmentation

Nov 05, 2022
Gongping Chen, Jianxun Zhang, Yuming Liu, Jingjing Yin, Xiaotao Yin, Liang Cui, Yu Dai

Breast cancer is one of the most common cancers endangering women's health globally. Accurate target lesion segmentation is essential for early clinical intervention and postoperative follow-up. Recently, many convolutional neural networks (CNNs) have been proposed to segment breast tumors from ultrasound images. However, complex ultrasound patterns and variable tumor shapes and sizes make accurate segmentation of breast lesions challenging. Motivated by selective kernel convolution, we introduce an enhanced selective kernel convolution for breast tumor segmentation, which integrates multiple feature-map region representations and adaptively recalibrates the weights of these regions along the channel and spatial dimensions. This region recalibration strategy enables the network to focus on high-contributing region features and mitigate the perturbation of less useful regions. Finally, the enhanced selective kernel convolution is integrated into a U-Net with deep supervision constraints to adaptively capture robust representations of breast tumors. Extensive comparisons with twelve state-of-the-art deep learning segmentation methods on three public breast ultrasound datasets demonstrate that our method achieves more competitive segmentation performance on breast ultrasound images.
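The recalibration idea — weighting competing branch feature maps per channel and per spatial location, then fusing them — can be illustrated with a rough NumPy sketch. This is a simplified stand-in for selective-kernel-style fusion, not ESKNet's actual architecture; the two branches, pooling choices, and toy sizes are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 3, 4, 4                     # toy channel and spatial sizes

# Two branch feature maps, standing in for convolutions with
# different kernel sizes (e.g. 3x3 vs. 5x5).
f3 = rng.standard_normal((C, H, W))
f5 = rng.standard_normal((C, H, W))

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Channel recalibration: global average pooling per branch, then a
# softmax across branches gives per-channel selection weights.
pooled = np.stack([f3.mean(axis=(1, 2)), f5.mean(axis=(1, 2))])   # (2, C)
chan_w = softmax(pooled, axis=0)                                   # (2, C)

# Spatial recalibration: channel-averaged maps, softmax across
# branches at every spatial location.
spat = np.stack([f3.mean(axis=0), f5.mean(axis=0)])                # (2, H, W)
spat_w = softmax(spat, axis=0)                                     # (2, H, W)

# Fuse the branches using both channel and spatial weights.
fused = (chan_w[0][:, None, None] * spat_w[0][None] * f3
         + chan_w[1][:, None, None] * spat_w[1][None] * f5)        # (C, H, W)
```

The branch weights sum to one at each channel and each location, so the fusion is a convex combination of the two branch responses.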

* 12 pages, 8 figures 