
Yu-Hua Chen

GTR-CTRL: Instrument and Genre Conditioning for Guitar-Focused Music Generation with Transformers

Feb 10, 2023
Pedro Sarmento, Adarsh Kumar, Yu-Hua Chen, CJ Carr, Zack Zukowski, Mathieu Barthet


Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model

Feb 20, 2022
Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang


Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking

Jun 16, 2021
Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang


Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization

May 18, 2020
Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang


Score and Lyrics-Free Singing Voice Generation

Dec 26, 2019
Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang


Dixit: Interactive Visual Storytelling via Term Manipulation

Mar 11, 2019
Chao-Chun Hsu, Yu-Hua Chen, Zi-Yuan Chen, Hsin-Yu Lin, Ting-Hao 'Kenneth' Huang, Lun-Wei Ku
