Zhiqiang Lv

MFA-Conformer: Multi-scale Feature Aggregation Conformer for Automatic Speaker Verification

Mar 29, 2022
Yang Zhang, Zhiqiang Lv, Haibin Wu, Shanshan Zhang, Pengfei Hu, Zhiyong Wu, Hung-yi Lee, Helen Meng

In this paper, we present the Multi-scale Feature Aggregation Conformer (MFA-Conformer), an easy-to-implement, simple yet effective backbone for automatic speaker verification based on the Convolution-augmented Transformer (Conformer). The architecture of the MFA-Conformer is inspired by recent state-of-the-art models in speech recognition and speaker verification. First, we introduce a convolutional sub-sampling layer to decrease the computational cost of the model. Second, we adopt Conformer blocks, which combine Transformers and convolutional neural networks (CNNs) to capture global and local features effectively. Finally, the output feature maps of all Conformer blocks are concatenated to aggregate multi-scale representations before the final pooling. We evaluate the MFA-Conformer on widely used benchmarks. The best system obtains 0.64%, 1.29% and 1.63% EER on the VoxCeleb1-O, SITW.Dev, and SITW.Eval sets, respectively. The MFA-Conformer significantly outperforms the popular ECAPA-TDNN systems in both recognition performance and inference speed. Last but not least, ablation studies clearly demonstrate that combining global and local feature learning leads to robust and accurate speaker embedding extraction. We will release the code to facilitate comparison in future work.
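The multi-scale aggregation step is what distinguishes this backbone from a plain Conformer encoder. Below is a minimal PyTorch sketch of that idea, not the authors' released code: `nn.TransformerEncoderLayer` merely stands in for a full Conformer block, and all layer sizes (`d_model=256`, six blocks, a 192-dimensional embedding) as well as the simple mean pooling are illustrative assumptions.

```python
# Sketch of multi-scale feature aggregation: outputs of ALL blocks are
# concatenated along the feature dimension before pooling, rather than
# using only the last block's output.
import torch
import torch.nn as nn

class MFAConformerSketch(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_blocks=6, emb_dim=192):
        super().__init__()
        # Convolutional sub-sampling to halve the frame rate (cuts cost).
        self.subsample = nn.Sequential(
            nn.Conv1d(n_mels, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Stand-in blocks: a real Conformer block pairs self-attention
        # with a convolution module; this keeps the sketch runnable.
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
             for _ in range(n_blocks)]
        )
        # Concatenating all block outputs yields n_blocks * d_model dims.
        self.proj = nn.Linear(n_blocks * d_model, emb_dim)

    def forward(self, feats):                      # feats: (B, n_mels, T)
        x = self.subsample(feats).transpose(1, 2)  # (B, T', d_model)
        outs = []
        for block in self.blocks:
            x = block(x)
            outs.append(x)                         # keep every block's output
        multi_scale = torch.cat(outs, dim=-1)      # (B, T', n_blocks*d_model)
        pooled = multi_scale.mean(dim=1)           # mean pooling stands in
        return self.proj(pooled)                   # speaker embedding

emb = MFAConformerSketch()(torch.randn(2, 80, 200))  # -> shape (2, 192)
```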

* submitted to INTERSPEECH 2022 

VRM-Phase I VKW system description of long-short video customizable keyword wakeup challenge

Oct 18, 2021
Yougen Yuan, Zhiqiang Lv, Shen Huang, Pengfei Hu

Keyword wakeup technology has long been a research hotspot in speech processing, but much of the related work has been done on different datasets. We organized a Chinese long-short video keyword wakeup challenge (Video Keyword Wakeup Challenge, VKW) to test each participating team's ability to build a keyword wakeup system on a public dataset. Submitted systems not only need to support multiple different keywords, but also need to support wakeup on any customized keyword. This paper describes the setup of the VKW challenge and the experimental results of some participating teams.

* 6 pages, in Chinese language, 3 tables, NCMMC 2021 conference paper 