Erik McDermott

Neural Transducer Training: Reduced Memory Consumption with Sample-wise Computation

Nov 29, 2022
Stefan Braun, Erik McDermott, Roger Hsiao

Figures 1–4

Variable Attention Masking for Configurable Transformer Transducer Speech Recognition

Nov 02, 2022
Pawel Swietojanski, Stefan Braun, Dogan Can, Thiago Fraga da Silva, Arnab Ghoshal, Takaaki Hori, Roger Hsiao, Henry Mason, Erik McDermott, Honza Silovsky, Ruchir Travadi, Xiaodan Zhuang

Figures 1–4

A Density Ratio Approach to Language Model Fusion in End-To-End Automatic Speech Recognition

Feb 28, 2020
Erik McDermott, Hasim Sak, Ehsan Variani

Figures 1–4

Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss

Feb 14, 2020
Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, Shankar Kumar

Figures 1–4