Music Generation


Music generation is the task of producing music or music-like audio with a model or algorithm.
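As a minimal illustration of generating music from a model, the toy sketch below samples a melody from a hand-written first-order Markov chain over MIDI pitch numbers. The transition table and function names are invented for this example; this is not the method of any paper listed on this page.

```python
import random

# Toy first-order Markov chain over MIDI pitch numbers (C-major fragment).
# Purely illustrative: an arbitrary hand-written transition table,
# not derived from any trained model or dataset.
TRANSITIONS = {
    60: [62, 64, 67],   # C4 -> D4, E4, or G4
    62: [60, 64, 65],   # D4 -> C4, E4, or F4
    64: [62, 65, 67],   # E4 -> D4, F4, or G4
    65: [64, 67],       # F4 -> E4 or G4
    67: [65, 60, 72],   # G4 -> F4, C4, or C5
    72: [67],           # C5 -> G4
}

def generate_melody(start: int = 60, length: int = 16, seed: int = 0) -> list[int]:
    """Sample a pitch sequence by walking the transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(generate_melody())
```

Real systems replace the hand-written table with a learned model (e.g. a Transformer or diffusion model over symbolic tokens or audio), but the sampling loop has the same shape: condition on what has been generated so far, draw the next event.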

Aliasing-Free Neural Audio Synthesis

Dec 23, 2025

Aligning Generative Music AI with Human Preferences: Methods and Challenges

Nov 19, 2025

Efficient Optimization of Hierarchical Identifiers for Generative Recommendation

Dec 20, 2025

Emovectors: assessing emotional content in jazz improvisations for creativity evaluation

Dec 09, 2025

Generating Piano Music with Transformers: A Comparative Study of Scale, Data, and Metrics

Nov 10, 2025

Memo2496: Expert-Annotated Dataset and Dual-View Adaptive Framework for Music Emotion Recognition

Dec 17, 2025

Diff-V2M: A Hierarchical Conditional Diffusion Model with Explicit Rhythmic Modeling for Video-to-Music Generation

Nov 12, 2025

Melodia: Training-Free Music Editing Guided by Attention Probing in Diffusion Models

Nov 18, 2025

On the Joint Minimization of Regularization Loss Functions in Deep Variational Bayesian Methods for Attribute-Controlled Symbolic Music Generation

Nov 10, 2025

Conditional Diffusion as Latent Constraints for Controllable Symbolic Music Generation

Nov 10, 2025