Oliver Watts


Performance of data-driven inner speech decoding with same-task EEG-fMRI data fusion and bimodal models

Jun 19, 2023
Holly Wilson, Scott Wellington, Foteini Simistira Liwicki, Vibha Gupta, Rajkumar Saini, Kanjar De, Nosheen Abid, Sumit Rakesh, Johan Eriksson, Oliver Watts, Xi Chen, Mohammad Golbabaee, Michael J. Proulx, Marcus Liwicki, Eamonn O'Neill, Benjamin Metcalfe

Puffin: pitch-synchronous neural waveform generation for fullband speech on modest devices

Nov 25, 2022
Oliver Watts, Lovisa Wihlborg, Cassia Valentini-Botinhao

Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks

Sep 22, 2022
Cassia Valentini-Botinhao, Manuel Sam Ribeiro, Oliver Watts, Korin Richmond, Gustav Eje Henter

Using generative modelling to produce varied intonation for speech synthesis

Jun 10, 2019
Zack Hodari, Oliver Watts, Simon King

Median-Based Generation of Synthetic Speech Durations using a Non-Parametric Approach

Nov 11, 2016
Srikanth Ronanki, Oliver Watts, Simon King, Gustav Eje Henter
