Yi-Hsuan Yang

Theme Transformer: Symbolic Music Generation with Theme-Conditioned Transformer

Nov 07, 2021
Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, Yi-Hsuan Yang

Learning To Generate Piano Music With Sustain Pedals

Nov 01, 2021
Joann Ching, Yi-Hsuan Yang

Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features

Oct 17, 2021
Wei-Han Hsu, Bo-Yu Chen, Yi-Hsuan Yang

Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks

Oct 13, 2021
Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang

KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE using Mel-spectrograms

Oct 08, 2021
Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang

Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding

Aug 11, 2021
Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang

A Benchmarking Initiative for Audio-Domain Music Generation Using the Freesound Loop Dataset

Aug 03, 2021
Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang

EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation

Aug 03, 2021
Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang

DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models

Jul 30, 2021
Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang

MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding

Jul 12, 2021
Yi-Hui Chou, I-Chun Chen, Chin-Jui Chang, Joann Ching, Yi-Hsuan Yang
