Aciel Eshky

Automatic audiovisual synchronisation for ultrasound tongue imaging

May 31, 2021
Aciel Eshky, Joanne Cleland, Manuel Sam Ribeiro, Eleanor Sugden, Korin Richmond, Steve Renals

Silent versus modal multi-speaker speech recognition from ultrasound and video

Feb 27, 2021
Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals

Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors

Feb 27, 2021
Manuel Sam Ribeiro, Joanne Cleland, Aciel Eshky, Korin Richmond, Steve Renals

TaL: a synchronised multi-speaker corpus of ultrasound tongue imaging, audio, and lip videos

Nov 19, 2020
Manuel Sam Ribeiro, Jennifer Sanger, Jing-Xuan Zhang, Aciel Eshky, Alan Wrench, Korin Richmond, Steve Renals

Ultrasound tongue imaging for diarization and alignment of child speech therapy sessions

Aug 15, 2019
Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals

UltraSuite: A Repository of Ultrasound and Acoustic Data from Child Speech Therapy Sessions

Jul 01, 2019
Aciel Eshky, Manuel Sam Ribeiro, Joanne Cleland, Korin Richmond, Zoe Roxburgh, James Scobbie, Alan Wrench

Synchronising audio and ultrasound by learning cross-modal embeddings

Jul 01, 2019
Aciel Eshky, Manuel Sam Ribeiro, Korin Richmond, Steve Renals

Speaker-independent classification of phonetic segments from raw ultrasound in child speech

Jul 01, 2019
Manuel Sam Ribeiro, Aciel Eshky, Korin Richmond, Steve Renals
