Daniel Stoller

LLark: A Multimodal Foundation Model for Music

Oct 11, 2023
Josh Gardner, Simon Durand, Daniel Stoller, Rachel M. Bittner

Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages

Jun 13, 2023
Simon Durand, Daniel Stoller, Sebastian Ewert

Few-Shot Musical Source Separation

May 03, 2022
Yu Wang, Daniel Stoller, Rachel M. Bittner, Juan Pablo Bello

Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling

Nov 14, 2019
Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon

Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators

May 29, 2019
Daniel Stoller, Sebastian Ewert, Simon Dixon

GAN-based Generation and Automatic Selection of Explanations for Neural Networks

Apr 27, 2019
Saumitra Mishra, Daniel Stoller, Emmanouil Benetos, Bob L. Sturm, Simon Dixon

End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model

Feb 18, 2019
Daniel Stoller, Simon Durand, Sebastian Ewert

Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation

Jun 08, 2018
Daniel Stoller, Sebastian Ewert, Simon Dixon

Adversarial Semi-Supervised Audio Source Separation applied to Singing Voice Extraction

Apr 06, 2018
Daniel Stoller, Sebastian Ewert, Simon Dixon

Jointly Detecting and Separating Singing Voice: A Multi-Task Approach

Apr 05, 2018
Daniel Stoller, Sebastian Ewert, Simon Dixon