Tara N. Sainath

Text Injection for Capitalization and Turn-Taking Prediction in Speech Models

Aug 14, 2023
Shaan Bijwadia, Shuo-yiin Chang, Weiran Wang, Zhong Meng, Hao Zhang, Tara N. Sainath

Text injection for automatic speech recognition (ASR), wherein unpaired text-only data is used to supplement paired audio-text data, has shown promising improvements in word error rate (WER). This study examines the use of text injection for auxiliary tasks, the non-ASR tasks often performed by an E2E model. In this work, we use joint end-to-end and internal language model training (JEIT) as our text injection algorithm to train an ASR model that performs two auxiliary tasks. The first is capitalization, a de-normalization task. The second is turn-taking prediction, which attempts to identify whether a user has completed their conversation turn in a digital assistant interaction. We show that our text injection method boosts capitalization performance on long-tail data and improves turn-taking detection recall.
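
As a rough illustration of how unpaired text can supervise an auxiliary head, the toy sketch below trains a single capitalization tagger from both a stand-in speech-encoder output (paired data) and a stand-in text representation (unpaired data). All modules, shapes, and the mixing weight are assumptions for illustration, not the JEIT recipe itself.

```python
# Toy sketch only: one capitalization head supervised from both paired-audio
# encoder states and unpaired-text representations (all tensors are stand-ins).
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 256
cap_head = nn.Linear(d, 2)                  # per-token: lowercase vs. capitalized
speech_states = torch.randn(4, 20, d)       # stand-in for paired-audio encoder output
text_states = torch.randn(8, 20, d)         # stand-in for unpaired-text representations
paired_tags = torch.randint(0, 2, (4, 20))  # capitalization labels for the paired data
text_tags = torch.randint(0, 2, (8, 20))    # labels derived from the text-only data

paired_loss = F.cross_entropy(cap_head(speech_states).transpose(1, 2), paired_tags)
text_loss = F.cross_entropy(cap_head(text_states).transpose(1, 2), text_tags)
loss = paired_loss + 0.1 * text_loss        # text-injection weight is illustrative
```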

Improving Joint Speech-Text Representations Without Alignment

Aug 11, 2023
Cal Peyser, Zhong Meng, Ke Hu, Rohit Prabhavalkar, Andrew Rosenberg, Tara N. Sainath, Michael Picheny, Kyunghyun Cho

The last year has seen astonishing progress in text-prompted image generation premised on the idea of a cross-modal representation space in which the text and image domains are represented jointly. In ASR, this idea has found application as joint speech-text encoders that can scale to the capacities of very large parameter models by being trained on both unpaired speech and text. While these methods show promise, they have required special treatment of the sequence-length mismatch inherent in speech and text, either by up-sampling heuristics or an explicit alignment model. In this work, we offer evidence that joint speech-text encoders naturally achieve consistent representations across modalities by disregarding sequence length, and argue that consistency losses could forgive length differences and simply assume the best alignment. We show that such a loss improves downstream WER in both a large-parameter monolingual and multilingual system.
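
One simple way for a consistency loss to "forgive" the length mismatch is to compare pooled embeddings rather than frame-aligned ones. The sketch below uses mean pooling and cosine distance; the shapes and the specific pooling are assumptions, not necessarily the paper's exact loss.

```python
# Minimal sketch: a consistency loss that ignores sequence length by comparing
# frame-averaged speech and text embeddings.
import torch
import torch.nn.functional as F

def length_agnostic_consistency(speech_enc, text_enc):
    """speech_enc: [B, T_s, D], text_enc: [B, T_t, D], with T_s != T_t allowed."""
    speech_vec = speech_enc.mean(dim=1)   # pool over speech frames
    text_vec = text_enc.mean(dim=1)       # pool over text tokens
    return 1.0 - F.cosine_similarity(speech_vec, text_vec, dim=-1).mean()

# Example with mismatched lengths: 80 speech frames vs. 12 text tokens.
loss = length_agnostic_consistency(torch.randn(4, 80, 256), torch.randn(4, 12, 256))
```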

* INTERSPEECH 2023  

How to Estimate Model Transferability of Pre-Trained Speech Models?

Jun 01, 2023
Zih-Ching Chen, Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Shou-Yiin Chang, Rohit Prabhavalkar, Hung-yi Lee, Tara N. Sainath

In this work, we introduce a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs) for fine-tuning target tasks. We draw on two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for the PSM candidates from their extracted representations. Our framework efficiently computes transferability scores without actual fine-tuning of candidate models or layers by making a temporal-independence assumption. We evaluate popular supervised speech models (e.g., Conformer RNN-Transducer) and self-supervised speech models (e.g., HuBERT) in cross-layer and cross-model settings using public data. Experimental results show a high Spearman's rank correlation and a low p-value between our estimated scores and the fine-tuning ground truth. Our proposed transferability framework requires less computational time and fewer resources, making it a resource-saving and time-efficient approach for tuning speech foundation models.
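
To make the workflow concrete, the sketch below scores candidate models with a simple class-conditional Gaussian log-likelihood over pooled representations (an assumed stand-in for the paper's Bayesian-likelihood and optimal-transport estimators) and then checks agreement with fine-tuning results via Spearman's rank correlation. The candidate features and fine-tuning accuracies are synthetic.

```python
# Illustrative only: rank candidate pre-trained models with a cheap
# likelihood-style score, then measure rank agreement with fine-tuned results.
import numpy as np
from scipy.stats import spearmanr

def gaussian_loglik_score(features, labels):
    """features: [N, D] pooled representations; labels: [N] target classes."""
    total = 0.0
    for c in np.unique(labels):
        x = features[labels == c]
        mu, var = x.mean(axis=0), x.var(axis=0) + 1e-6
        total += -0.5 * (((x - mu) ** 2 / var) + np.log(2 * np.pi * var)).sum()
    return total / len(features)

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=200)
# Three hypothetical candidates, each providing [N, D] extracted features.
scores = [gaussian_loglik_score(rng.normal(size=(200, 64)), labels) for _ in range(3)]
finetune_acc = [0.71, 0.84, 0.78]  # made-up fine-tuning ground truth
rho, p = spearmanr(scores, finetune_acc)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```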

* Accepted to Interspeech. Code will be released 

Semantic Segmentation with Bidirectional Language Models Improves Long-form ASR

May 28, 2023
W. Ronny Huang, Hao Zhang, Shankar Kumar, Shuo-yiin Chang, Tara N. Sainath

We propose a method of segmenting long-form speech by separating semantically complete sentences within the utterance. This prevents the ASR decoder from needlessly processing faraway context while also preventing it from missing relevant context within the current sentence. Semantically complete sentence boundaries are typically demarcated by punctuation in written text; unfortunately, spoken real-world utterances rarely contain punctuation. We address this limitation by distilling punctuation knowledge from a bidirectional teacher language model (LM) trained on written, punctuated text. We compare our segmenter, distilled from the LM teacher, against a segmenter distilled from an acoustic-pause-based teacher used in other works, on a streaming ASR pipeline. The pipeline with our segmenter achieves a 3.2% relative WER gain along with a 60 ms median end-of-segment latency reduction on a YouTube captioning task.
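
A minimal sketch of the distillation objective, assuming the teacher supplies per-token end-of-sentence probabilities (derived from punctuation in written text) and the streaming student predicts boundary logits; the interfaces and shapes here are assumptions.

```python
# Hedged sketch: distill sentence-boundary knowledge from a teacher LM into a
# streaming student by matching its per-token end-of-sentence probabilities.
import torch
import torch.nn.functional as F

def boundary_distillation_loss(student_logits, teacher_eos_prob):
    """student_logits: [B, T] raw scores; teacher_eos_prob: [B, T] in [0, 1]."""
    return F.binary_cross_entropy_with_logits(student_logits, teacher_eos_prob)

loss = boundary_distillation_loss(torch.randn(2, 50), torch.rand(2, 50))
```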

* Interspeech 2023. First 3 authors contributed equally 

Mixture-of-Expert Conformer for Streaming Multilingual ASR

May 25, 2023
Ke Hu, Bo Li, Tara N. Sainath, Yu Zhang, Francoise Beaufays

End-to-end models with large capacity have significantly improved multilingual automatic speech recognition, but their computation cost poses challenges for on-device applications. We propose a truly multilingual streaming Conformer incorporating mixture-of-expert (MoE) layers that learn to activate only a subset of parameters during training and inference. Each MoE layer consists of a softmax gate that chooses the best two experts among many during forward propagation. The proposed MoE layer offers efficient inference by activating a fixed number of parameters as the number of experts increases. We evaluate the proposed model on a set of 12 languages and achieve an average 11.9% relative improvement in WER over the baseline. Compared to an adapter model using ground-truth information, our MoE model achieves similar WER and activates a similar number of parameters, but without any language information. We further show around a 3% relative WER improvement from multilingual shallow fusion.
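
The gating mechanism can be sketched in a few lines. The layer below implements top-2 softmax gating over a small set of feed-forward experts; the dimensions, expert count, and the dense (evaluate-all-experts) dispatch are illustrative simplifications, not the paper's Conformer configuration.

```python
# Minimal top-2 softmax-gated mixture-of-experts feed-forward layer.
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, num_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.k = k

    def forward(self, x):                           # x: [B, T, d_model]
        probs = self.gate(x).softmax(dim=-1)        # softmax gate over experts
        topv, topi = probs.topk(self.k, dim=-1)     # keep the best two experts
        # For clarity every expert is evaluated and then masked; an efficient
        # implementation would route each token only to its selected experts.
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)  # [B, T, E, D]
        weights = torch.zeros_like(probs).scatter(-1, topi, topv)       # zero non-top-k
        return (weights.unsqueeze(-1) * expert_out).sum(dim=-2)

y = MoEFeedForward()(torch.randn(2, 10, 256))
```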

* Accepted to Interspeech 2023 

Modular Domain Adaptation for Conformer-Based Streaming ASR

May 22, 2023
Qiujia Li, Bo Li, Dongseong Hwang, Tara N. Sainath, Pedro M. Mengibar

Speech data from different domains has distinct acoustic and linguistic characteristics. It is common to train a single multidomain model, such as a Conformer transducer for speech recognition, on a mixture of data from all domains. However, changing the data in one domain or adding a new domain requires the multidomain model to be retrained. To this end, we propose a framework called modular domain adaptation (MDA) that enables a single model to process multidomain data while keeping all parameters domain-specific, i.e., each parameter is trained only on data from one domain. On a streaming Conformer transducer trained only on video caption data, experimental results show that an MDA-based model can reach performance similar to the multidomain model on other domains, such as voice search and dictation, by adding per-domain adapters and per-domain feed-forward networks in the Conformer encoder.
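
A hedged sketch of the per-domain parameterization: a residual adapter is selected by a domain id, so each adapter's weights are only ever updated by its own domain's data. The module names, sizes, and domain indexing are assumptions.

```python
# Sketch: one residual adapter per domain, selected at run time by domain id.
import torch
import torch.nn as nn

class PerDomainAdapter(nn.Module):
    def __init__(self, d_model=256, bottleneck=64, num_domains=3):
        super().__init__()
        self.adapters = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, d_model))
            for _ in range(num_domains)
        ])

    def forward(self, x, domain_id):
        # Only the selected domain's parameters are used, so gradients from
        # one domain never touch another domain's adapter.
        return x + self.adapters[domain_id](x)

out = PerDomainAdapter()(torch.randn(2, 10, 256), domain_id=1)
```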

* Accepted to Interspeech 2023 

Lego-Features: Exporting modular encoder features for streaming and deliberation ASR

Mar 31, 2023
Rami Botros, Rohit Prabhavalkar, Johan Schalkwyk, Ciprian Chelba, Tara N. Sainath, Françoise Beaufays

In end-to-end (E2E) speech recognition models, a representational tight-coupling inevitably emerges between the encoder and the decoder. We build upon recent work that has begun to explore building encoders with modular encoded representations, such that encoders and decoders from different models can be stitched together in a zero-shot manner without further fine-tuning. While previous research only addresses full-context speech models, we explore the problem in a streaming setting as well. Our framework builds on top of existing encoded representations, converting them to modular features, dubbed Lego-Features, without modifying the pre-trained model. The features remain interchangeable when the model is retrained with distinct initializations. Though sparse, the Lego-Features prove powerful when tested with RNN-T or LAS decoders, maintaining high-quality downstream performance. They are also rich enough to represent the first-pass prediction during two-pass deliberation. In this scenario, they outperform the N-best hypotheses, since they do not need to be supplemented with acoustic features to deliver the best results. Moreover, generating the Lego-Features does not require beam search or auto-regressive computation. Overall, they present a modular, powerful, and cheap alternative to the standard encoder output, as well as to the N-best hypotheses.
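
As a loose sketch of exporting features from a frozen encoder, the snippet below adds a small trainable projection with a ReLU to produce sparse, non-negative features that a separately trained decoder could consume. The module, the sparsity mechanism, and the shapes are assumptions, not the paper's Lego-Features recipe.

```python
# Hedged sketch: a small projection on top of frozen encoder outputs yields
# sparse features without modifying the pre-trained model itself.
import torch
import torch.nn as nn

class LegoProjection(nn.Module):
    def __init__(self, d_enc=512, d_feat=512):
        super().__init__()
        self.proj = nn.Linear(d_enc, d_feat)

    def forward(self, enc_out):
        return torch.relu(self.proj(enc_out))   # ReLU keeps the features sparse

frozen_encoder_out = torch.randn(2, 100, 512)   # stand-in for pre-trained encoder states
lego_features = LegoProjection()(frozen_encoder_out)
```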

Practical Conformer: Optimizing size, speed and flops of Conformer for on-Device and cloud ASR

Mar 31, 2023
Rami Botros, Anmol Gulati, Tara N. Sainath, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu

Conformer models maintain a large number of internal states, the vast majority of which are associated with self-attention layers. With limited memory bandwidth, reading these from memory at each inference step can slow down inference. In this paper, we design an optimized Conformer that is small enough to meet on-device restrictions and has fast inference on TPUs. We explore various ideas to improve execution speed, including replacing lower Conformer blocks with convolution-only blocks, strategically downsizing the architecture, and utilizing an RNNAttention-Performer. Our optimized Conformer can be readily incorporated into a cascaded-encoder setting, allowing a second-pass decoder to operate on its output and improve accuracy whenever more resources are available. Altogether, we find that these optimizations can reduce latency by 6.8x at a reasonable trade-off in quality. With the cascaded second pass, we show that the recognition accuracy is completely recoverable. Thus, our proposed encoder can double as a strong standalone on-device encoder and as the first part of a high-performance ASR pipeline.

A Deliberation-based Joint Acoustic and Text Decoder

Mar 23, 2023
Sepand Mavandadi, Tara N. Sainath, Ke Hu, Zelin Wu

We propose a new two-pass E2E speech recognition model that improves ASR performance by training on a combination of paired data and unpaired text data. Previously, the joint acoustic and text decoder (JATD) has shown promising results through the use of text data during model training, and the recently introduced deliberation architecture has reduced recognition errors by leveraging first-pass decoding results. Our method, dubbed Deliberation-JATD, combines the spelling-correction ability of deliberation with JATD's use of unpaired text data to further improve performance. The proposed model produces substantial gains across multiple test sets, especially those focused on rare words, where it reduces word error rate (WER) by between 12% and 22.5% relative. This is achieved without increasing model size or requiring multi-stage training, making Deliberation-JATD an efficient candidate for on-device applications.

* Interspeech 2021 

Sharing Low Rank Conformer Weights for Tiny Always-On Ambient Speech Recognition Models

Mar 15, 2023
Steven M. Hernandez, Ding Zhao, Shaojin Ding, Antoine Bruguier, Rohit Prabhavalkar, Tara N. Sainath, Yanzhang He, Ian McGraw

Continued improvements in machine learning techniques offer exciting new opportunities through the use of larger models and larger training datasets. However, there is a growing need to offer these new capabilities on low-power devices such as smartphones, wearables, and other embedded environments where only limited memory is available. Toward this, we consider methods to reduce the size of Conformer-based speech recognition models, which typically require more than 100M parameters, down to just 5M parameters while minimizing the impact on model quality. Such a model allows us to achieve always-on ambient speech recognition on edge devices with low-memory neural processors. We propose model weight reuse at different levels within our model architecture: (i) repeating full Conformer block layers, (ii) sharing specific Conformer modules across layers, (iii) sharing sub-components per Conformer module, and (iv) sharing decomposed sub-component weights after low-rank decomposition. By sharing weights at different levels of our model, we can retain the full model in memory while increasing the number of virtual transformations applied to the input. Through a series of ablation studies and evaluations, we find that with weight sharing and a low-rank architecture, we can achieve WERs of 2.84 and 2.94 on LibriSpeech dev-clean and test-clean, respectively, with a 5M-parameter model.
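
The parameter-sharing arithmetic can be sketched directly: the snippet below shares one low-rank factorized linear transform across several virtual layers and compares its parameter count with that of unshared dense layers. The dimensions and rank are illustrative, not the paper's 5M-parameter configuration.

```python
# Sketch: share a low-rank factorized weight (W ~= U @ V) across layers.
import torch
import torch.nn as nn

class SharedLowRankLinear(nn.Module):
    """A linear transform whose low-rank factors can be reused by many layers."""
    def __init__(self, d_in=512, d_out=512, rank=64):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_in, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, d_out) * 0.02)

    def forward(self, x):
        return x @ self.U @ self.V

shared = SharedLowRankLinear()
# Reusing the same module in six blocks keeps one set of weights in memory
# while applying the transformation repeatedly ("virtual" layers).
blocks = nn.ModuleList([shared for _ in range(6)])
dense_params = 6 * 512 * 512                   # six unshared dense layers (no bias)
shared_params = sum(p.numel() for p in shared.parameters())
print(f"dense: {dense_params} params, shared low-rank: {shared_params} params")
```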

* Accepted to IEEE ICASSP 2023 