William Hartmann

Using i-vectors for subject-independent cross-session EEG transfer learning

Jan 16, 2024
Jonathan Lasko, Jeff Ma, Mike Nicoletti, Jonathan Sussman-Fort, Sooyoung Jeong, William Hartmann

Training Autoregressive Speech Recognition Models with Limited in-domain Supervision

Oct 27, 2022
Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover

Combining Unsupervised and Text Augmented Semi-Supervised Learning for Low Resourced Autoregressive Speech Recognition

Oct 29, 2021
Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover

Overcoming Domain Mismatch in Low Resource Sequence-to-Sequence ASR Models using Hybrid Generated Pseudotranscripts

Jun 14, 2021
Chak-Fai Li, Francis Keith, William Hartmann, Matthew Snover, Owen Kimball

Using heterogeneity in semi-supervised transcription hypotheses to improve code-switched speech recognition

Jun 14, 2021
Andrew Slottje, Shannon Wotherspoon, William Hartmann, Matthew Snover, Owen Kimball

Learning from Noisy Labels with Noise Modeling Network

May 01, 2020
Zhuolin Jiang, Jan Silovsky, Man-Hung Siu, William Hartmann, Herbert Gish, Sancar Adali

Cross-lingual Information Retrieval with BERT

Apr 24, 2020
Zhuolin Jiang, Amro El-Jaroudi, William Hartmann, Damianos Karakos, Lingjun Zhao

Towards a New Understanding of the Training of Neural Networks with Mislabeled Training Data

Sep 18, 2019
Herbert Gish, Jan Silovsky, Man-Ling Sung, Man-Hung Siu, William Hartmann, Zhuolin Jiang
