Zhengzhe Yu

UCorrect: An Unsupervised Framework for Automatic Speech Recognition Error Correction

Jan 11, 2024
Jiaxin Guo, Minghan Wang, Xiaosong Qiao, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhengzhe Yu, Yinglu Li, Chang Su, Min Zhang, Shimin Tao, Hao Yang


Text Style Transfer Back-Translation

Jun 02, 2023
Daimeng Wei, Zhanglin Wu, Hengchao Shang, Zongyao Li, Minghan Wang, Jiaxin Guo, Xiaoyu Chen, Zhengzhe Yu, Hao Yang


Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

Dec 22, 2021
Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang


Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

Dec 22, 2021
Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
