Jon Barker

CLSP

Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility Prediction

Apr 08, 2022

Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in Hearing-impaired Listeners

Apr 08, 2022

Auditory-Based Data Augmentation for End-to-End Automatic Speech Recognition

Apr 08, 2022

Leveraging Bitstream Metadata for Fast and Accurate Video Compression Correction

Jan 31, 2022

Teacher-Student MixIT for Unsupervised and Semi-supervised Speech Separation

Jun 16, 2021

Optimising Hearing Aid Fittings for Speech in Noise with a Differentiable Hearing Loss Model

Jun 08, 2021

DHASP: Differentiable Hearing Aid Speech Processing

Mar 15, 2021

The Use of Voice Source Features for Sung Speech Recognition

Feb 23, 2021

Time-Domain Speech Extraction with Spatial Information and Multi Speaker Conditioning Mechanism

Feb 07, 2021

On End-to-end Multi-channel Time Domain Speech Separation in Reverberant Environments

Nov 11, 2020