Yi-Hsuan Yang

NTU

Improving Unsupervised Clean-to-Rendered Guitar Tone Transformation Using GANs and Integrated Unaligned Clean Data

Jun 22, 2024

Model-Based Deep Learning for Music Information Research

Jun 17, 2024

Local Periodicity-Based Beat Tracking for Expressive Classical Piano Music

Aug 20, 2023

An Analysis Method for Metric-Level Switching in Beat Tracking

Oct 13, 2022

JukeDrummer: Conditional Beat-aware Audio-domain Drum Accompaniment Generation via Transformer VQ-VAE

Oct 12, 2022

Melody Infilling with User-Provided Structural Context

Oct 06, 2022

Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach

Sep 17, 2022

Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation

Sep 05, 2022

DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A Comprehensive Evaluation

Aug 19, 2022

Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model

Feb 20, 2022