Raden Mu'az Mun'im

Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition

Nov 12, 2018
Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda
