Matthew Snover

Training Autoregressive Speech Recognition Models with Limited in-domain Supervision

Oct 27, 2022
Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover

Combining Unsupervised and Text Augmented Semi-Supervised Learning for Low Resourced Autoregressive Speech Recognition

Oct 29, 2021
Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover

Overcoming Domain Mismatch in Low Resource Sequence-to-Sequence ASR Models using Hybrid Generated Pseudotranscripts

Jun 14, 2021
Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover, Owen Kimball

Using heterogeneity in semi-supervised transcription hypotheses to improve code-switched speech recognition

Jun 14, 2021
Andrew Slottje, Shannon Wotherspoon, William Hartmann, Matthew Snover, Owen Kimball
