Zhengkun Tian

Continual Learning for Fake Audio Detection

Apr 15, 2021
Haoxin Ma, Jiangyan Yi, Jianhua Tao, Ye Bai, Zhengkun Tian, Chenglong Wang


Half-Truth: A Partially Fake Audio Detection Dataset

Apr 08, 2021
Jiangyan Yi, Ye Bai, Jianhua Tao, Zhengkun Tian, Chenglong Wang, Tao Wang, Ruibo Fu


FSR: Accelerating the Inference Process of Transducer-Based Models by Applying Fast-Skip Regularization

Apr 07, 2021
Zhengkun Tian, Jiangyan Yi, Ye Bai, Jianhua Tao, Shuai Zhang, Zhengqi Wen


TSNAT: Two-Step Non-Autoregressvie Transformer Models for Speech Recognition

Apr 04, 2021
Zhengkun Tian, Jiangyan Yi, Jianhua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen, Xuefei Liu


Fast End-to-End Speech Recognition via a Non-Autoregressive Model and Cross-Modal Knowledge Transferring from BERT

Feb 20, 2021
Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang


Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT

Feb 15, 2021
Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang


Gated Recurrent Fusion with Joint Training Framework for Robust End-to-End Speech Recognition

Nov 09, 2020
Cunhang Fan, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Bin Liu, Zhengqi Wen


Decoupling Pronunciation and Language for End-to-end Code-switching Automatic Speech Recognition

Oct 28, 2020
Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Ye Bai, Jianhua Tao, Zhengqi Wen


Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition

May 30, 2020
Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang
