
Hung-yi Lee


Few-Shot Cross-Lingual TTS Using Transferable Phoneme Embedding

Jun 27, 2022
Wei-Ping Huang, Po-Chun Chen, Sung-Feng Huang, Hung-yi Lee


Tackling Spoofing-Aware Speaker Verification with Multi-Model Fusion

Jun 18, 2022
Haibin Wu, Jiawen Kang, Lingwei Meng, Yang Zhang, Xixin Wu, Zhiyong Wu, Hung-yi Lee, Helen Meng


Few-shot Prompting Towards Controllable Response Generation

Jun 09, 2022
Hsuan Su, Pohan Chi, Shih-Cheng Huang, Chung Ho Lam, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee


Searching for the Essence of Adversarial Perturbations

May 30, 2022
Dennis Y. Menn, Hung-yi Lee


Structured Prompt Tuning

May 24, 2022
Chi-Liang Liu, Hung-yi Lee, Wen-tau Yih


Self-Supervised Speech Representation Learning: A Review

May 21, 2022
Abdelrahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob D. Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaløe, Tara N. Sainath, Shinji Watanabe


Silence is Sweeter Than Speech: Self-Supervised Model Using Silence to Store Speaker Information

May 08, 2022
Chi-Luen Feng, Po-chun Hsu, Hung-yi Lee


Meta Learning for Natural Language Processing: A Survey

May 03, 2022
Hung-yi Lee, Shang-Wen Li, Ngoc Thang Vu


AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

Apr 30, 2022
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee


XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding

Apr 29, 2022
Chan-Jan Hsu, Hung-yi Lee, Yu Tsao
