
Hung-Yi Lee

S3PRL-VC: Open-source Voice Conversion Framework with Self-supervised Speech Representations

Oct 12, 2021

Analyzing the Robustness of Unsupervised Speech Recognition

Oct 12, 2021

CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning

Oct 08, 2021

Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability

Mar 12, 2021

TaylorGAN: Neighbor-Augmented Policy Update for Sample-Efficient Natural Language Generation

Nov 27, 2020

Semi-Supervised Spoken Language Understanding via Self-Supervised Speech and Language Model Pretraining

Oct 26, 2020

VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture

Jun 07, 2020

Learning Interpretable and Discrete Representations with Adversarial Training for Unsupervised Text Classification

Apr 28, 2020

A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT

Apr 20, 2020

Further Boosting BERT-based Models by Duplicating Existing Layers: Some Intriguing Phenomena inside BERT

Jan 25, 2020