Yanmin Qian

Self-Supervised Learning Based Domain Adaptation for Robust Speaker Verification
Aug 31, 2021
Zhengyang Chen, Shuai Wang, Yanmin Qian

* Published in: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Basis-MelGAN: Efficient Neural Vocoder Based on Audio Decomposition
Jun 25, 2021
Zhengxi Liu, Yanmin Qian

* Accepted to INTERSPEECH 2021

Dual-Path Modeling for Long Recording Speech Separation in Meetings
Feb 23, 2021
Chenda Li, Zhuo Chen, Yi Luo, Cong Han, Tianyan Zhou, Keisuke Kinoshita, Marc Delcroix, Shinji Watanabe, Yanmin Qian

* Accepted by ICASSP 2021

End-to-End Dereverberation, Beamforming, and Speech Recognition with Improved Numerical Stability and Advanced Frontend
Feb 23, 2021
Wangyou Zhang, Christoph Boeddeker, Shinji Watanabe, Tomohiro Nakatani, Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Naoyuki Kamo, Reinhold Haeb-Umbach, Yanmin Qian

* 5 pages, 1 figure, accepted by ICASSP 2021

The Accented English Speech Recognition Challenge 2020: Open Datasets, Tracks, Baselines, Results and Methods
Feb 20, 2021
Xian Shi, Fan Yu, Yizhou Lu, Yuhao Liang, Qiangze Feng, Daliang Wang, Yanmin Qian, Lei Xie

* Accepted by ICASSP 2021

AISPEECH-SJTU accent identification system for the Accented English Speech Recognition Challenge
Feb 19, 2021
Houjun Huang, Xu Xiang, Yexin Yang, Rao Ma, Yanmin Qian

* Accepted to ICASSP 2021

Unit selection synthesis based data augmentation for fixed phrase speaker verification
Feb 19, 2021
Houjun Huang, Xu Xiang, Fei Zhao, Shuai Wang, Yanmin Qian

* Accepted to ICASSP 2021

Data Augmentation for End-to-end Code-switching Speech Recognition
Nov 04, 2020
Chenpeng Du, Hao Li, Yizhou Lu, Lan Wang, Yanmin Qian

* Accepted by SLT 2021

Future Vector Enhanced LSTM Language Model for LVCSR
Jul 31, 2020
Qi Liu, Yanmin Qian, Kai Yu

* Accepted by ASRU 2017

End-to-End Multi-speaker Speech Recognition with Transformer
Feb 13, 2020
Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, Shinji Watanabe

* To appear in ICASSP 2020

MIMO-SPEECH: End-to-End Multi-Channel Multi-Speaker Speech Recognition
Oct 16, 2019
Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, Shinji Watanabe

* Accepted at ASRU 2019

Margin Matters: Towards More Discriminative Deep Neural Network Embeddings for Speaker Recognition
Jun 18, 2019
Xu Xiang, Shuai Wang, Houjun Huang, Yanmin Qian, Kai Yu

* Not accepted by INTERSPEECH 2019

End-to-End Monaural Multi-speaker ASR System without Pretraining
Nov 05, 2018
Xuankai Chang, Yanmin Qian, Kai Yu, Shinji Watanabe

* Submitted to ICASSP 2019

Sequence Discriminative Training for Deep Learning based Acoustic Keyword Spotting
Aug 02, 2018
Zhehuai Chen, Yanmin Qian, Kai Yu

* Speech Communication, vol. 102, 100-111, 2018
* Accepted by Speech Communication, 08/02/2018

Single-Channel Multi-talker Speech Recognition with Permutation Invariant Training
Jul 19, 2017
Yanmin Qian, Xuankai Chang, Dong Yu

* 11 pages, 6 figures, submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing. arXiv admin note: text overlap with arXiv:1704.01985

Recognizing Multi-talker Speech with Permutation Invariant Training
Jun 19, 2017
Dong Yu, Xuankai Chang, Yanmin Qian

* 5 pages, 6 figures, INTERSPEECH 2017

Very Deep Convolutional Neural Networks for Robust Speech Recognition
Oct 02, 2016
Yanmin Qian, Philip C. Woodland

* Accepted by SLT 2016