Tzu-Quan Lin
MelHuBERT: A simplified HuBERT on Mel spectrogram

Nov 17, 2022
Tzu-Quan Lin, Hung-yi Lee, Hao Tang


Compressing Transformer-based self-supervised models for speech processing

Nov 17, 2022
Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-yi Lee, Hao Tang


SUPERB @ SLT 2022: Challenge on Generalization and Efficiency of Self-Supervised Speech Representation Learning

Oct 16, 2022
Tzu-hsun Feng, Annie Dong, Ching-Feng Yeh, Shu-wen Yang, Tzu-Quan Lin, Jiatong Shi, Kai-Wei Chang, Zili Huang, Haibin Wu, Xuankai Chang, Shinji Watanabe, Abdelrahman Mohamed, Shang-Wen Li, Hung-yi Lee
