
Kyu Han

Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages

May 02, 2022

SRU++: Pioneering Fast Recurrence with Attention for Speech Recognition

Oct 11, 2021

Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition

Sep 14, 2021

Speaker Diarization With Lexical Information

Nov 28, 2018