Shujie Liu

The YiTrans End-to-End Speech Translation System for IWSLT 2022 Offline Shared Task

Jun 14, 2022
Ziqiang Zhang, Junyi Ao, Long Zhou, Shujie Liu, Furu Wei, Jinyu Li


Ultra Fast Speech Separation Model with Teacher Student Learning

Apr 27, 2022
Sanyuan Chen, Yu Wu, Zhuo Chen, Jian Wu, Takuya Yoshioka, Shujie Liu, Jinyu Li, Xiangzhan Yu


Why does Self-Supervised Learning for Speech Recognition Benefit Speaker Recognition?

Apr 27, 2022
Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Zhuo Chen, Peidong Wang, Gang Liu, Jinyu Li, Jian Wu, Xiangzhan Yu, Furu Wei


Speech Pre-training with Acoustic Piece

Apr 07, 2022
Shuo Ren, Shujie Liu, Yu Wu, Long Zhou, Furu Wei


Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data

Mar 31, 2022
Junyi Ao, Ziqiang Zhang, Long Zhou, Shujie Liu, Haizhou Li, Tom Ko, Lirong Dai, Jinyu Li, Yao Qian, Furu Wei


Self-Supervised Learning for speech recognition with Intermediate layer supervision

Dec 16, 2021
Chengyi Wang, Yu Wu, Sanyuan Chen, Shujie Liu, Jinyu Li, Yao Qian, Zhenglu Yang


Separating Long-Form Speech with Group-Wise Permutation Invariant Training

Nov 17, 2021
Wangyou Zhang, Zhuo Chen, Naoyuki Kanda, Shujie Liu, Jinyu Li, Sefik Emre Eskimez, Takuya Yoshioka, Xiong Xiao, Zhong Meng, Yanmin Qian, Furu Wei


WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Oct 29, 2021
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei


Improving Noise Robustness of Contrastive Speech Representation Learning with Speech Reconstruction

Oct 28, 2021
Heming Wang, Yao Qian, Xiaofei Wang, Yiming Wang, Chengyi Wang, Shujie Liu, Takuya Yoshioka, Jinyu Li, DeLiang Wang
