Hung-Yi Lee

Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models' Transferability
Mar 12, 2021
Wei-Tsung Kao, Hung-Yi Lee

TaylorGAN: Neighbor-Augmented Policy Update for Sample-Efficient Natural Language Generation
Nov 27, 2020
Chun-Hsing Lin, Siang-Ruei Wu, Hung-Yi Lee, Yun-Nung Chen

Semi-Supervised Spoken Language Understanding via Self-Supervised Speech and Language Model Pretraining
Oct 26, 2020
Cheng-I Lai, Yung-Sung Chuang, Hung-Yi Lee, Shang-Wen Li, James Glass

VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net architecture
Jun 07, 2020
Da-Yi Wu, Yen-Hao Chen, Hung-Yi Lee

Learning Interpretable and Discrete Representations with Adversarial Training for Unsupervised Text Classification
Apr 28, 2020
Yau-Shian Wang, Hung-Yi Lee, Yun-Nung Chen

A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT
Apr 20, 2020
Chi-Liang Liu, Tsung-Yuan Hsu, Yung-Sung Chuang, Hung-Yi Lee

Further Boosting BERT-based Models by Duplicating Existing Layers: Some Intriguing Phenomena inside BERT
Jan 25, 2020
Wei-Tsung Kao, Tsung-Han Wu, Po-Han Chi, Chun-Cheng Hsieh, Hung-Yi Lee

J-Net: Randomly weighted U-Net for audio source separation
Nov 29, 2019
Bo-Wen Chen, Yen-Min Hsu, Hung-Yi Lee

Training a code-switching language model with monolingual data
Nov 14, 2019
Shun-Po Chuang, Tzu-Wei Sung, Hung-Yi Lee

What does a network layer hear? Analyzing hidden representations of end-to-end ASR through speech synthesis
Nov 04, 2019
Chung-Yi Li, Pei-Chieh Yuan, Hung-Yi Lee