"music": models, code, and papers

Instrument Separation of Symbolic Music by Explicitly Guided Diffusion Model

Sep 05, 2022
Sangjun Han, Hyeongrae Ihm, DaeHan Ahn, Woohyung Lim

Benchmarks and leaderboards for sound demixing tasks

May 12, 2023
Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva

Low-Resource Music Genre Classification with Advanced Neural Model Reprogramming

Nov 02, 2022
Yun-Ning Hung, Chao-Han Huck Yang, Pin-Yu Chen, Alexander Lerch

An interactive music infilling interface for pop music composition

Mar 23, 2022
Rui Guo

DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability

Oct 11, 2022
Kin Wai Cheuk, Ryosuke Sawata, Toshimitsu Uesaka, Naoki Murata, Naoya Takahashi, Shusuke Takahashi, Dorien Herremans, Yuki Mitsufuji

MF-PAM: Accurate Pitch Estimation through Periodicity Analysis and Multi-level Feature Fusion

Jun 16, 2023
Woo-Jin Chung, Doyeon Kim, Soo-Whan Chung, Hong-Goo Kang

Pipeline for recording datasets and running neural networks on the Bela embedded hardware platform

Jun 20, 2023
Teresa Pelinski, Rodrigo Diaz, Adán L. Benito Temprano, Andrew McPherson

Conditional variational autoencoder to improve neural audio synthesis for polyphonic music sound

Nov 16, 2022
Seokjin Lee, Minhan Kim, Seunghyeon Shin, Daeho Lee, Inseon Jang, Wootaek Lim

Self-Supervised Learning of Music-Dance Representation through Explicit-Implicit Rhythm Synchronization

Jul 07, 2022
Jiashuo Yu, Junfu Pu, Ying Cheng, Rui Feng, Ying Shan

Bi-Sampling Approach to Classify Music Mood leveraging Raga-Rasa Association in Indian Classical Music

Mar 13, 2022
Mohan Rao B C, Vinayak Arkachaari, Harsha M N, Sushmitha M N, Gayathri Ramesh K K, Ullas M S, Pathi Mohan Rao, Sudha G, Narayana Darapaneni
