Shujie Liu

Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation

Sep 19, 2019
Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, Ming Zhou


Accelerating Transformer Decoding via a Hybrid of Self-attention and Recurrent Neural Network

Sep 05, 2019
Chengyi Wang, Shuangzhi Wu, Shujie Liu


Source Dependency-Aware Transformer with Supervised Self-Attention

Sep 05, 2019
Chengyi Wang, Shuangzhi Wu, Shujie Liu


Explicit Cross-lingual Pre-training for Unsupervised Machine Translation

Aug 31, 2019
Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, Shuai Ma


Unsupervised Neural Machine Translation with SMT as Posterior Regularization

Jan 14, 2019
Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, Shuai Ma


Close to Human Quality TTS with Transformer

Sep 19, 2018
Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, Ming Liu, Ming Zhou


Approximate Distribution Matching for Sequence-to-Sequence Learning

Sep 02, 2018
Wenhu Chen, Guanlin Li, Shujie Liu, Zhirui Zhang, Mu Li, Ming Zhou


Style Transfer as Unsupervised Machine Translation

Aug 23, 2018
Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, Enhong Chen


Regularizing Neural Machine Translation by Target-bidirectional Agreement

Aug 13, 2018
Zhirui Zhang, Shuangzhi Wu, Shujie Liu, Mu Li, Ming Zhou, Enhong Chen


Triangular Architecture for Rare Language Translation

Jul 11, 2018
Shuo Ren, Wenhu Chen, Shujie Liu, Mu Li, Ming Zhou, Shuai Ma
