
Jinyu Li

WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Oct 29, 2021
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

Improving Noise Robustness of Contrastive Speech Representation Learning with Speech Reconstruction

Oct 28, 2021
Heming Wang, Yao Qian, Xiaofei Wang, Yiming Wang, Chengyi Wang, Shujie Liu, Takuya Yoshioka, Jinyu Li, DeLiang Wang

Continuous Speech Separation with Recurrent Selective Attention Network

Oct 28, 2021
Yixuan Zhang, Zhuo Chen, Jian Wu, Takuya Yoshioka, Peidong Wang, Zhong Meng, Jinyu Li

Separating Long-Form Speech with Group-Wise Permutation Invariant Training

Oct 27, 2021
Wangyou Zhang, Zhuo Chen, Naoyuki Kanda, Shujie Liu, Jinyu Li, Sefik Emre Eskimez, Takuya Yoshioka, Xiong Xiao, Zhong Meng, Yanmin Qian, Furu Wei

Factorized Neural Transducer for Efficient Language Model Adaptation

Oct 18, 2021
Xie Chen, Zhong Meng, Sarangarajan Parthasarathy, Jinyu Li

Internal Language Model Adaptation with Text-Only Data for End-to-End Speech Recognition

Oct 14, 2021
Zhong Meng, Yashesh Gaur, Naoyuki Kanda, Jinyu Li, Xie Chen, Yu Wu, Yifan Gong

SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing

Oct 14, 2021
Junyi Ao, Rui Wang, Long Zhou, Shujie Liu, Shuo Ren, Yu Wu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei

UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training

Oct 12, 2021
Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu

Wav2vec-Switch: Contrastive Learning from Original-noisy Speech Pairs for Robust Speech Recognition

Oct 11, 2021
Yiming Wang, Jinyu Li, Heming Wang, Yao Qian, Chengyi Wang, Yu Wu

Have best of both worlds: two-pass hybrid and E2E cascading framework for speech recognition

Oct 10, 2021
Guoli Ye, Vadim Mazalov, Jinyu Li, Yifan Gong
