Richard Cave

An analysis of degenerating speech due to progressive dysarthria on ASR performance

Oct 31, 2022
Katrin Tomanek, Katie Seaver, Pan-Pan Jiang, Richard Cave, Lauren Harrel, Jordan R. Green


Although personalized automatic speech recognition (ASR) models have recently been designed to recognize even severely impaired speech, model performance may degrade over time for persons with degenerating speech. The aims of this study were to (1) analyze how ASR performance changes over time in individuals with degenerating speech, and (2) explore mitigation strategies to optimize recognition throughout disease progression. Speech was recorded by four individuals whose speech was degenerating due to amyotrophic lateral sclerosis (ALS). Word error rates (WER) across recording sessions were computed for three ASR models: Unadapted Speaker Independent (U-SI), Adapted Speaker Independent (A-SI), and Adapted Speaker Dependent (A-SD, i.e., personalized). The performance of all three models degraded significantly over time as speech became more impaired, but the performance of the A-SD model improved markedly when it was updated with recordings from the severe stages of speech progression. Recording additional utterances early in the disease, before speech had degraded significantly, did not improve the performance of A-SD models. Overall, our findings emphasize the importance of continuous recording (and model retraining) when providing personalized models for individuals with progressive speech impairments.
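The WER reported across recording sessions is the standard word-level edit distance normalized by reference length. A minimal sketch of that computation (an illustration only, not the authors' evaluation pipeline):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # match / substitution
                d[i - 1][j] + 1,  # deletion
                d[i][j - 1] + 1,  # insertion
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

Tracking this value per speaker and per session is what reveals the degradation over time described above.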

* Submitted to ICASSP 2023 

Assessing ASR Model Quality on Disordered Speech using BERTScore

Sep 21, 2022
Jimmy Tobin, Qisheng Li, Subhashini Venugopalan, Katie Seaver, Richard Cave, Katrin Tomanek


Word Error Rate (WER) is the primary metric used to assess automatic speech recognition (ASR) model quality. ASR models have been shown to have much higher WER on speakers with speech impairments than on typical English speakers, and at such high error rates it is hard to determine whether a model is still useful. This study investigates the use of BERTScore, an evaluation metric for text generation, to provide a more informative measure of ASR model quality and usefulness. Both BERTScore and WER were compared to prediction errors manually annotated by Speech-Language Pathologists for error type and assessment; BERTScore correlated more strongly with these human judgments. BERTScore was notably more robust to orthographic changes (contraction and normalization errors) where meaning was preserved. Furthermore, BERTScore was a better fit for human error assessment than WER, as measured using ordinal logistic regression and the Akaike Information Criterion (AIC). Overall, our findings suggest that BERTScore can complement WER when assessing ASR model performance from a practical perspective, especially for accessibility applications, where models are useful even at lower accuracy than for typical speech.
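BERTScore itself requires a pretrained language model, but WER's sensitivity to meaning-preserving orthographic changes, such as the contraction errors mentioned above, can be illustrated with plain word-level edit distance. The `CONTRACTIONS` table and `normalize` helper below are hypothetical stand-ins for illustration, not part of the paper's method:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # match / substitution
                d[i - 1][j] + 1,  # deletion
                d[i][j - 1] + 1,  # insertion
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical contraction table: a toy normalization step, not the paper's.
CONTRACTIONS = {"can't": "can not", "don't": "do not"}

def normalize(text: str) -> str:
    """Expand contractions so orthographic variants compare equal."""
    return " ".join(CONTRACTIONS.get(w, w) for w in text.split())

# A contraction mismatch preserves meaning yet inflates raw WER:
raw = wer("i can not hear you", "i can't hear you")          # penalized
norm = wer(normalize("i can not hear you"),
           normalize("i can't hear you"))                     # penalty removed
```

A semantic metric such as BERTScore avoids this failure mode without hand-written tables, which is one reason it tracks human judgments of usefulness more closely on disordered speech.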

* Accepted to Interspeech 2022 Workshop on Speech for Social Good 