
Chi-Chun Lee

RE-LLM: Refining Empathetic Speech-LLM Responses by Integrating Emotion Nuance

Feb 11, 2026

ASR for Affective Speech: Investigating Impact of Emotion and Speech Generative Strategy

Jan 28, 2026

Joint Learning using Mixture-of-Expert-Based Representation for Enhanced Speech Generation and Robust Emotion Recognition

Sep 10, 2025

Lessons Learnt: Revisiting Key Training Strategies for Effective Speech Emotion Recognition in the Wild

Aug 10, 2025

Is It Still Fair? Investigating Gender Fairness in Cross-Corpus Speech Emotion Recognition

Jan 02, 2025

Mouth Articulation-Based Anchoring for Improved Cross-Corpus Speech Emotion Recognition

Dec 27, 2024

Stimulus Modality Matters: Impact of Perceptual Evaluations from Different Modalities on Speech Emotion Recognition System Performance

Sep 16, 2024

DiffEVC: Any-to-Any Emotion Voice Conversion with Expressive Guidance

Sep 05, 2024

EMO-Codec: An In-Depth Look at Emotion Preservation Capacity of Legacy and Neural Codec Models With Subjective and Objective Evaluations

Jul 30, 2024

EMO-Codec: A Depth Look at Emotion Preservation Capacity of Legacy and Neural Codec Models With Subjective and Objective Evaluations (earlier version of the above)

Jul 24, 2024