Music Generation

Music generation is the task of producing music or music-like audio with a model or algorithm.
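As a minimal illustration of the "algorithm" end of that definition, the sketch below composes a short melody by sampling notes from a pentatonic scale and rendering them as sine waves to a WAV file. This is a hypothetical toy example using only the Python standard library; the papers listed on this page study learned generative models, not rule-based synthesis like this.

```python
import math
import random
import struct
import wave

# Toy algorithmic composer: pick notes from a C-major pentatonic
# scale and render each one as a sine wave. Illustrative only.
SAMPLE_RATE = 22050
PENTATONIC_HZ = [261.63, 293.66, 329.63, 392.00, 440.00]  # C D E G A

def render_note(freq_hz, duration_s, amplitude=0.5):
    """Return one note as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

def generate_melody(num_notes=8, seed=0):
    """Concatenate randomly chosen pentatonic notes into a melody."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_notes):
        samples.extend(render_note(rng.choice(PENTATONIC_HZ), 0.25))
    return samples

def write_wav(path, samples):
    """Write float samples as 16-bit mono PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

melody = generate_melody()
write_wav("melody.wav", melody)
```

Swapping the random note picker for a trained sequence model (and the sine renderer for a neural vocoder or diffusion decoder) is, loosely, the shape of the model-based systems surveyed below.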

MuMu-LLaMA: Multi-modal Music Understanding and Generation via Large Language Models (Dec 09, 2024)

Detecting Machine-Generated Music with Explainability -- A Challenge and Early Benchmarks (Dec 18, 2024)

Watermarking Training Data of Music Generation Models (Dec 12, 2024)

CoheDancers: Enhancing Interactive Group Dance Generation through Music-Driven Coherence Decomposition (Dec 26, 2024)

VERSA: A Versatile Evaluation Toolkit for Speech, Audio, and Music (Dec 23, 2024)

M6: Multi-generator, Multi-domain, Multi-lingual and cultural, Multi-genres, Multi-instrument Machine-Generated Music Detection Databases (Dec 08, 2024)

Interpreting Graphic Notation with MusicLDM: An AI Improvisation of Cornelius Cardew's Treatise (Dec 12, 2024)

Zema Dataset: A Comprehensive Study of Yaredawi Zema with a Focus on Horologium Chants (Dec 25, 2024)

Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound (Feb 07, 2025)

A2SB: Audio-to-Audio Schrodinger Bridges (Jan 20, 2025)