Xuankai Chang

An Exploration of Self-Supervised Pretrained Representations for End-to-End Speech Recognition

Oct 09, 2021
Xuankai Chang, Takashi Maekaku, Pengcheng Guo, Jing Shi, Yen-Ju Lu, Aswin Shanmugam Subramanian, Tianzi Wang, Shu-wen Yang, Yu Tsao, Hung-yi Lee, Shinji Watanabe

* To appear in ASRU 2021

Streaming End-to-End ASR based on Blockwise Non-Autoregressive Models

Jul 20, 2021
Tianzi Wang, Yuya Fujita, Xuankai Chang, Shinji Watanabe

* 5 pages, 1 figure, Interspeech 2021

Speech Representation Learning Combining Conformer CPC with Deep Cluster for the ZeroSpeech Challenge 2021

Jul 13, 2021
Takashi Maekaku, Xuankai Chang, Yuya Fujita, Li-Wei Chen, Shinji Watanabe, Alexander Rudnicky

Multi-Speaker ASR Combining Non-Autoregressive Conformer CTC and Conditional Speaker Chain

Jun 16, 2021
Pengcheng Guo, Xuankai Chang, Shinji Watanabe, Lei Xie

* Accepted by Interspeech 2021

SUPERB: Speech processing Universal PERformance Benchmark

May 03, 2021
Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, Hung-yi Lee

* Submitted to Interspeech 2021

Hypothesis Stitcher for End-to-End Speaker-attributed ASR on Long-form Multi-talker Recordings

Jan 06, 2021
Xuankai Chang, Naoyuki Kanda, Yashesh Gaur, Xiaofei Wang, Zhong Meng, Takuya Yoshioka

* Submitted to ICASSP 2021

The 2020 ESPnet update: new features, broadened applications, performance improvements, and future plans

Dec 23, 2020
Shinji Watanabe, Florian Boyer, Xuankai Chang, Pengcheng Guo, Tomoki Hayashi, Yosuke Higuchi, Takaaki Hori, Wen-Chin Huang, Hirofumi Inaguma, Naoyuki Kamo, Shigeki Karita, Chenda Li, Jing Shi, Aswin Shanmugam Subramanian, Wangyou Zhang

Investigation of End-To-End Speaker-Attributed ASR for Continuous Multi-Talker Recordings

Aug 11, 2020
Naoyuki Kanda, Xuankai Chang, Yashesh Gaur, Xiaofei Wang, Zhong Meng, Zhuo Chen, Takuya Yoshioka

CHiME-6 Challenge: Tackling Multispeaker Speech Recognition for Unsegmented Recordings

May 02, 2020
Shinji Watanabe, Michael Mandel, Jon Barker, Emmanuel Vincent, Ashish Arora, Xuankai Chang, Sanjeev Khudanpur, Vimal Manohar, Daniel Povey, Desh Raj, David Snyder, Aswin Shanmugam Subramanian, Jan Trmal, Bar Ben Yair, Christoph Boeddeker, Zhaoheng Ni, Yusuke Fujita, Shota Horiguchi, Naoyuki Kanda, Takuya Yoshioka, Neville Ryant

End-to-End Multi-speaker Speech Recognition with Transformer

Feb 13, 2020
Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, Shinji Watanabe

* To appear in ICASSP 2020

MIMO-SPEECH: End-to-End Multi-Channel Multi-Speaker Speech Recognition

Oct 16, 2019
Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, Shinji Watanabe

* Accepted at ASRU 2019

End-to-End Monaural Multi-speaker ASR System without Pretraining

Nov 05, 2018
Xuankai Chang, Yanmin Qian, Kai Yu, Shinji Watanabe

* Submitted to ICASSP 2019

Single-Channel Multi-talker Speech Recognition with Permutation Invariant Training

Jul 19, 2017
Yanmin Qian, Xuankai Chang, Dong Yu

* 11 pages, 6 figures, Submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing. arXiv admin note: text overlap with arXiv:1704.01985

Recognizing Multi-talker Speech with Permutation Invariant Training

Jun 19, 2017
Dong Yu, Xuankai Chang, Yanmin Qian

* 5 pages, 6 figures, Interspeech 2017