
Yuan Zong

Key Laboratory of Child Development and Learning Science of the Ministry of Education, and the Department of Information Science and Engineering, Southeast University, China

EALD-MLLM: Emotion Analysis in Long-sequential and De-identity videos with Multi-modal Large Language Model

May 01, 2024

PAVITS: Exploring Prosody-aware VITS for End-to-End Emotional Voice Conversion

Mar 03, 2024

Emotion-Aware Contrastive Adaptation Network for Source-Free Cross-Corpus Speech Emotion Recognition

Jan 23, 2024

Speech Swin-Transformer: Exploring a Hierarchical Transformer with Shifted Windows for Speech Emotion Recognition

Jan 19, 2024

Improving Speaker-independent Speech Emotion Recognition Using Dynamic Joint Distribution Adaptation

Jan 18, 2024

Towards Domain-Specific Cross-Corpus Speech Emotion Recognition Approach

Dec 11, 2023

PainSeeker: An Automated Method for Assessing Pain in Rats Through Facial Expressions

Nov 06, 2023

Learning to Rank Onset-Occurring-Offset Representations for Micro-Expression Recognition

Oct 07, 2023

Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition

Oct 06, 2023

Time-Frequency Transformer: A Novel Time Frequency Joint Learning Method for Speech Emotion Recognition

Aug 28, 2023