
Tara N. Sainath

Extreme Encoder Output Frame Rate Reduction: Improving Computational Latencies of Large End-to-End Models

Feb 27, 2024
Rohit Prabhavalkar, Zhong Meng, Weiran Wang, Adam Stooke, Xingyu Cai, Yanzhang He, Arun Narayanan, Dongseong Hwang, Tara N. Sainath, Pedro J. Moreno

Handling Ambiguity in Emotion: From Out-of-Domain Detection to Distribution Estimation

Feb 20, 2024
Wen Wu, Bo Li, Chao Zhang, Chung-Cheng Chiu, Qiujia Li, Junwen Bai, Tara N. Sainath, Philip C. Woodland

Multilingual and Fully Non-Autoregressive ASR with Large Language Model Fusion: A Comprehensive Study

Jan 23, 2024
W. Ronny Huang, Cyril Allauzen, Tongzhou Chen, Kilol Gupta, Ke Hu, James Qin, Yu Zhang, Yongqiang Wang, Shuo-Yiin Chang, Tara N. Sainath

Efficient Adapter Finetuning for Tail Languages in Streaming Multilingual ASR

Jan 17, 2024
Junwen Bai, Bo Li, Qiujia Li, Tara N. Sainath, Trevor Strohman

USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models

Jan 03, 2024
Shaojin Ding, David Qiu, David Rim, Yanzhang He, Oleg Rybakov, Bo Li, Rohit Prabhavalkar, Weiran Wang, Tara N. Sainath, Shivani Agrawal, Zhonglin Han, Jian Li, Amir Yazdanbakhsh

Improved Long-Form Speech Recognition by Jointly Modeling the Primary and Non-primary Speakers

Dec 18, 2023
Guru Prakash Arumugam, Shuo-yiin Chang, Tara N. Sainath, Rohit Prabhavalkar, Quan Wang, Shaan Bijwadia

Text Injection for Capitalization and Turn-Taking Prediction in Speech Models

Aug 14, 2023
Shaan Bijwadia, Shuo-yiin Chang, Weiran Wang, Zhong Meng, Hao Zhang, Tara N. Sainath

Improving Joint Speech-Text Representations Without Alignment

Aug 11, 2023
Cal Peyser, Zhong Meng, Ke Hu, Rohit Prabhavalkar, Andrew Rosenberg, Tara N. Sainath, Michael Picheny, Kyunghyun Cho

How to Estimate Model Transferability of Pre-Trained Speech Models?

Jun 01, 2023
Zih-Ching Chen, Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Shuo-Yiin Chang, Rohit Prabhavalkar, Hung-yi Lee, Tara N. Sainath
