Hung-yi Lee

An Exploration of In-Context Learning for Speech Language Model

Oct 19, 2023
Ming-Hao Hsu, Kai-Wei Chang, Shang-Wen Li, Hung-yi Lee

Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models

Oct 17, 2023
Hsuan Su, Cheng-Chu Cheng, Hua Farn, Shachi H Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee

A Closer Look into Automatic Evaluation Using Large Language Models

Oct 09, 2023
Cheng-Han Chiang, Hung-yi Lee

Findings of the 2023 ML-SUPERB Challenge: Pre-Training and Evaluation over More Languages and Beyond

Oct 09, 2023
Jiatong Shi, William Chen, Dan Berrebbi, Hsiu-Hsuan Wang, Wei-Ping Huang, En-Pei Hu, Ho-Lam Chuang, Xuankai Chang, Yuxun Tang, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe

Chat Vector: A Simple Approach to Equip LLMs With New Language Chat Capabilities

Oct 07, 2023
Shih-Cheng Huang, Pin-Zu Li, Yu-Chi Hsu, Kuang-Ming Chen, Yu Tung Lin, Shih-Kai Hsiao, Richard Tzong-Han Tsai, Hung-yi Lee

Zero Resource Code-switched Speech Benchmark Using Speech Utterance Pairs For Multiple Spoken Languages

Oct 04, 2023
Kuan-Po Huang, Chih-Kai Yang, Yu-Kuan Fu, Ewan Dunbar, Hung-yi Lee

Prompting and Adapter Tuning for Self-supervised Encoder-Decoder Speech Model

Oct 04, 2023
Kai-Wei Chang, Ming-Hsin Chen, Yun-Ping Lin, Jing Neng Hsu, Paul Kuo-Ming Huang, Chien-yu Huang, Shang-Wen Li, Hung-yi Lee

Low-Resource Self-Supervised Learning with SSL-Enhanced TTS

Sep 29, 2023
Po-chun Hsu, Ali Elkahky, Wei-Ning Hsu, Yossi Adi, Tu Anh Nguyen, Jade Copet, Emmanuel Dupoux, Hung-yi Lee, Abdelrahman Mohamed

Investigating Human-Identifiable Features Hidden in Adversarial Perturbations

Sep 28, 2023
Dennis Y. Menn, Tzu-hsun Feng, Sriram Vishwanath, Hung-yi Lee

Towards General-Purpose Text-Instruction-Guided Voice Conversion

Sep 25, 2023
Chun-Yi Kuan, Chen An Li, Tsu-Yuan Hsu, Tse-Yang Lin, Ho-Lam Chung, Kai-Wei Chang, Shuo-yiin Chang, Hung-yi Lee
