"music generation": models, code, and papers

Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls

Jul 19, 2023
Lejun Min, Junyan Jiang, Gus Xia, Jingwei Zhao

Discrete Diffusion Probabilistic Models for Symbolic Music Generation

May 16, 2023
Matthias Plasser, Silvan Peter, Gerhard Widmer

Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder

Jul 08, 2023
Qi Wang, Shubing Zhang, Li Zhou

Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation

Oct 19, 2022
Botao Yu, Peiling Lu, Rui Wang, Wei Hu, Xu Tan, Wei Ye, Shikun Zhang, Tao Qin, Tie-Yan Liu

MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models

Oct 25, 2023
Dingyao Yu, Kaitao Song, Peiling Lu, Tianyu He, Xu Tan, Wei Ye, Shikun Zhang, Jiang Bian

HumTrans: A Novel Open-Source Dataset for Humming Melody Transcription and Beyond

Sep 18, 2023
Shansong Liu, Xu Li, Dian Li, Ying Shan

Music Representing Corpus Virtual: An Open Sourced Library for Explorative Music Generation, Sound Design, and Instrument Creation with Artificial Intelligence and Machine Learning

May 24, 2023
Christopher Johann Clarke

ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models

Feb 09, 2023
Pengfei Zhu, Chao Pang, Shuohuan Wang, Yekun Chai, Yu Sun, Hao Tian, Hua Wu

A Unified Framework for Multimodal, Multi-Part Human Motion Synthesis

Nov 28, 2023
Zixiang Zhou, Yu Wan, Baoyuan Wang

MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies

Aug 03, 2023
Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov
