Catherine Lai

Layer-Wise Analysis of Self-Supervised Acoustic Word Embeddings: A Study on Speech Emotion Recognition

Feb 04, 2024
Alexandra Saliba, Yuanchao Li, Ramon Sanabria, Catherine Lai


Quantifying the perceptual value of lexical and non-lexical channels in speech

Jul 07, 2023
Sarenne Wallbridge, Peter Bell, Catherine Lai


Transfer Learning for Personality Perception via Speech Emotion Recognition

May 25, 2023
Yuanchao Li, Peter Bell, Catherine Lai


ASR and Emotional Speech: A Word-Level Investigation of the Mutual Impact of Speech and Emotion Recognition

May 25, 2023
Yuanchao Li, Zeyu Zhao, Ondrej Klejch, Peter Bell, Catherine Lai


Cross-Attention is Not Enough: Incongruity-Aware Multimodal Sentiment Analysis and Emotion Recognition

May 23, 2023
Yaoting Wang, Yuanchao Li, Peter Bell, Catherine Lai


I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue

Mar 17, 2023
Yuanchao Li, Koji Inoue, Leimin Tian, Changzeng Fu, Carlos Ishi, Hiroshi Ishiguro, Tatsuya Kawahara, Catherine Lai


Alzheimer's Dementia Detection through Spontaneous Dialogue with Proactive Robotic Listeners

Nov 15, 2022
Yuanchao Li, Catherine Lai, Divesh Lala, Koji Inoue, Tatsuya Kawahara


Multimodal Dyadic Impression Recognition via Listener Adaptive Cross-Domain Fusion

Nov 09, 2022
Yuanchao Li, Peter Bell, Catherine Lai
