Po-Han Chi

A Large-Scale Evaluation of Speech Foundation Models

Apr 15, 2024
Shu-wen Yang, Heng-Jui Chang, Zili Huang, Andy T. Liu, Cheng-I Lai, Haibin Wu, Jiatong Shi, Xuankai Chang, Hsiang-Sheng Tsai, Wen-Chin Huang, Tzu-hsun Feng, Po-Han Chi, Yist Y. Lin, Yung-Sung Chuang, Tzu-Hsien Huang, Wei-Cheng Tseng, Kushal Lakhotia, Shang-Wen Li, Abdelrahman Mohamed, Shinji Watanabe, Hung-yi Lee

Leveraging Sequence Embedding and Convolutional Neural Network for Protein Function Prediction

Dec 01, 2021
Wei-Cheng Tseng, Po-Han Chi, Jia-Hua Wu, Min Sun

SpeechNet: A Universal Modularized Model for Speech Processing Tasks

May 31, 2021
Yi-Chen Chen, Po-Han Chi, Shu-wen Yang, Kai-Wei Chang, Jheng-hao Lin, Sung-Feng Huang, Da-Rong Liu, Chi-Liang Liu, Cheng-Kuang Lee, Hung-yi Lee

SUPERB: Speech processing Universal PERformance Benchmark

May 03, 2021
Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, Hung-yi Lee

Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer

Jun 09, 2020
Tsung-Han Wu, Chun-Chen Hsieh, Yen-Hao Chen, Po-Han Chi, Hung-yi Lee

Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation

May 26, 2020
Po-Han Chi, Pei-Hung Chung, Tsung-Han Wu, Chun-Cheng Hsieh, Shang-Wen Li, Hung-yi Lee

Further Boosting BERT-based Models by Duplicating Existing Layers: Some Intriguing Phenomena inside BERT

Jan 25, 2020
Wei-Tsung Kao, Tsung-Han Wu, Po-Han Chi, Chun-Cheng Hsieh, Hung-Yi Lee

Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders

Oct 25, 2019
Andy T. Liu, Shu-wen Yang, Po-Han Chi, Po-chun Hsu, Hung-yi Lee