"speech": models, code, and papers

Expressive paragraph text-to-speech synthesis with multi-step variational autoencoder

Aug 29, 2023
Xuyuan Li, Zengqiang Shang, Jian Liu, Hua Hua, Peiyang Shi, Pengyuan Zhang

Modelling prospective memory and resilient situated communications via Wizard of Oz

Nov 09, 2023
Yanzhe Li, Frank Broz, Mark Neerincx

Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations

Sep 09, 2023
Debaditya Shome, Ali Etemad

Brain-Driven Representation Learning Based on Diffusion Model

Nov 14, 2023
Soowon Kim, Seo-Hyun Lee, Young-Eun Lee, Ji-Won Lee, Ji-Ha Park, Seong-Whan Lee

IruMozhi: Automatically classifying diglossia in Tamil

Nov 13, 2023
Kabilan Prasanna, Aryaman Arora

Rep2wav: Noise Robust text-to-speech Using self-supervised representations

Aug 28, 2023
Qiushi Zhu, Yu Gu, Chao Weng, Yuchen Hu, Lirong Dai, Jie Zhang

REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints

Nov 22, 2023
Francesco Corti, Balz Maag, Joachim Schauer, Ulrich Pferschy, Olga Saukh

Overview of the HASOC Subtrack at FIRE 2023: Identification of Tokens Contributing to Explicit Hate in English by Span Detection

Nov 16, 2023
Sarah Masud, Mohammad Aflah Khan, Md. Shad Akhtar, Tanmoy Chakraborty

DurIAN-E: Duration Informed Attention Network For Expressive Text-to-Speech Synthesis

Sep 22, 2023
Yu Gu, Yianrao Bian, Guangzhi Lei, Chao Weng, Dan Su

EMOCONV-DIFF: Diffusion-based Speech Emotion Conversion for Non-parallel and In-the-wild Data

Sep 14, 2023
Navin Raj Prabhu, Bunlong Lay, Simon Welker, Nale Lehmann-Willenbrock, Timo Gerkmann
