Kuan-Po Huang

Investigating Zero-Shot Generalizability on Mandarin-English Code-Switched ASR and Speech-to-text Translation of Recent Foundation Models with Self-Supervision and Weak Supervision

Dec 30, 2023
Chih-Kai Yang, Kuan-Po Huang, Ke-Han Lu, Chun-Yi Kuan, Chi-Yuan Hsiao, Hung-yi Lee

Noise robust distillation of self-supervised speech models via correlation metrics

Dec 19, 2023
Fabian Ritter-Gutierrez, Kuan-Po Huang, Dianwen Ng, Jeremy H. M. Wong, Hung-yi Lee, Eng Siong Chng, Nancy F. Chen

Zero Resource Code-switched Speech Benchmark Using Speech Utterance Pairs For Multiple Spoken Languages

Oct 04, 2023
Kuan-Po Huang, Chih-Kai Yang, Yu-Kuan Fu, Ewan Dunbar, Hung-yi Lee

Ensemble knowledge distillation of self-supervised speech models

Feb 24, 2023
Kuan-Po Huang, Tzu-hsun Feng, Yu-Kuan Fu, Tsu-Yuan Hsu, Po-Chieh Yen, Wei-Cheng Tseng, Kai-Wei Chang, Hung-yi Lee

Improving generalizability of distilled self-supervised speech processing models under distorted settings

Oct 20, 2022
Kuan-Po Huang, Yu-Kuan Fu, Tsu-Yuan Hsu, Fabian Ritter Gutierrez, Fan-Lin Wang, Liang-Hsuan Tseng, Yu Zhang, Hung-yi Lee

Improving the transferability of speech separation by meta-learning

Mar 11, 2022
Kuan-Po Huang, Yuan-Kuei Wu, Hung-yi Lee

Multi-accent Speech Separation with One Shot Learning

Jun 28, 2021
Kuan-Po Huang, Yuan-Kuei Wu, Hung-yi Lee
