Sebastian Ewert

Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages

Jun 13, 2023
Simon Durand, Daniel Stoller, Sebastian Ewert

Towards Robust Unsupervised Disentanglement of Sequential Data -- A Case Study Using Music Audio

May 12, 2022
Yin-Jyun Luo, Sebastian Ewert, Simon Dixon

A Lightweight Instrument-Agnostic Model for Polyphonic Note Transcription and Multipitch Estimation

Mar 18, 2022
Rachel M. Bittner, Juan José Bosch, David Rubinstein, Gabriel Meseguer-Brocal, Sebastian Ewert

Improving Lyrics Alignment through Joint Pitch Detection

Feb 03, 2022
Jiawen Huang, Emmanouil Benetos, Sebastian Ewert

Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling

Nov 14, 2019
Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon

Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators

May 29, 2019
Daniel Stoller, Sebastian Ewert, Simon Dixon

End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model

Feb 18, 2019
Daniel Stoller, Simon Durand, Sebastian Ewert

Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation

Jun 08, 2018
Daniel Stoller, Sebastian Ewert, Simon Dixon

Adversarial Semi-Supervised Audio Source Separation applied to Singing Voice Extraction

Apr 06, 2018
Daniel Stoller, Sebastian Ewert, Simon Dixon

Jointly Detecting and Separating Singing Voice: A Multi-Task Approach

Apr 05, 2018
Daniel Stoller, Sebastian Ewert, Simon Dixon
