Rohit Prabhavalkar

JOIST: A Joint Speech and Text Streaming Model For ASR

Oct 13, 2022
Tara N. Sainath, Rohit Prabhavalkar, Ankur Bapna, Yu Zhang, Zhouyuan Huo, Zhehuai Chen, Bo Li, Weiran Wang, Trevor Strohman

Improving Deliberation by Text-Only and Semi-Supervised Training

Jun 29, 2022
Ke Hu, Tara N. Sainath, Yanzhang He, Rohit Prabhavalkar, Trevor Strohman, Sepand Mavandadi, Weiran Wang

E2E Segmenter: Joint Segmenting and Decoding for Long-Form ASR

Apr 22, 2022
W. Ronny Huang, Shuo-yiin Chang, David Rybach, Rohit Prabhavalkar, Tara N. Sainath, Cyril Allauzen, Cal Peyser, Zhiyun Lu

A Unified Cascaded Encoder ASR Model for Dynamic Model Sizes

Apr 20, 2022
Shaojin Ding, Weiran Wang, Ding Zhao, Tara N. Sainath, Yanzhang He, Robert David, Rami Botros, Xin Wang, Rina Panigrahy, Qiao Liang, Dongseong Hwang, Ian McGraw, Rohit Prabhavalkar, Trevor Strohman

Improving Rare Word Recognition with LM-aware MWER Training

Apr 15, 2022
Weiran Wang, Tongzhou Chen, Tara N. Sainath, Ehsan Variani, Rohit Prabhavalkar, Ronny Huang, Bhuvana Ramabhadran, Neeraj Gaur, Sepand Mavandadi, Cal Peyser, Trevor Strohman, Yanzhang He, David Rybach

Neural-FST Class Language Model for End-to-End Speech Recognition

Jan 31, 2022
Antoine Bruguier, Duc Le, Rohit Prabhavalkar, Dangna Li, Zhe Liu, Bo Wang, Eun Chang, Fuchun Peng, Ozlem Kalinli, Michael L. Seltzer

Input Length Matters: An Empirical Study Of RNN-T And MWER Training For Long-form Telephony Speech Recognition

Oct 08, 2021
Zhiyun Lu, Yanwei Pan, Thibault Doutre, Liangliang Cao, Rohit Prabhavalkar, Chao Zhang, Trevor Strohman

A Neural Acoustic Echo Canceller Optimized Using An Automatic Speech Recognizer And Large Scale Synthetic Data

Jun 01, 2021
Nathan Howard, Alex Park, Turaj Zakizadeh Shabestary, Alexander Gruenstein, Rohit Prabhavalkar

Dissecting User-Perceived Latency of On-Device E2E Speech Recognition

Apr 06, 2021
Yuan Shangguan, Rohit Prabhavalkar, Hang Su, Jay Mahadeokar, Yangyang Shi, Jiatong Zhou, Chunyang Wu, Duc Le, Ozlem Kalinli, Christian Fuegen, Michael L. Seltzer
