Kjell Schubert

Factorized Blank Thresholding for Improved Runtime Efficiency of Neural Transducers

Nov 02, 2022
Duc Le, Frank Seide, Yuhao Wang, Yang Li, Kjell Schubert, Ozlem Kalinli, Michael L. Seltzer

Improving Data Driven Inverse Text Normalization using Data Augmentation

Jul 20, 2022
Laxmi Pandey, Debjyoti Paul, Pooja Chitkara, Yutong Pang, Xuedong Zhang, Kjell Schubert, Mark Chou, Shu Liu, Yatharth Saraf

RNN-T For Latency Controlled ASR With Improved Beam Search

Nov 05, 2019
Mahaveer Jain, Kjell Schubert, Jay Mahadeokar, Ching-Feng Yeh, Kaustubh Kalgaonkar, Anuroop Sriram, Christian Fuegen, Michael L. Seltzer

Transformer-Transducer: End-to-End Speech Recognition with Self-Attention

Oct 28, 2019
Ching-Feng Yeh, Jay Mahadeokar, Kaustubh Kalgaonkar, Yongqiang Wang, Duc Le, Mahaveer Jain, Kjell Schubert, Christian Fuegen, Michael L. Seltzer
