Multimodal Emotion Recognition


Multimodal emotion recognition is the process of recognizing emotions from multiple modalities, such as speech, text, and facial expressions.
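A common baseline for combining these modalities is late fusion: each modality gets its own classifier, and the per-modality emotion probabilities are averaged into a final prediction. The sketch below is a minimal illustration of that idea; the emotion labels, example logits, and the `late_fusion` helper are all hypothetical, not taken from any paper listed here.

```python
import math

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def softmax(logits):
    """Convert raw classifier scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion(modality_logits, weights=None):
    """Weighted average of per-modality emotion probabilities (late fusion)."""
    n = len(modality_logits)
    weights = weights or [1.0 / n] * n  # default: equal weight per modality
    probs = [softmax(logits) for logits in modality_logits]
    fused = [sum(w * p[i] for w, p in zip(weights, probs))
             for i in range(len(EMOTIONS))]
    label = EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
    return label, fused

# Hypothetical per-modality classifier outputs (speech, text, face)
speech = [2.0, 0.1, 0.3, 0.5]   # leans "happy"
text   = [1.5, 0.2, 0.1, 0.9]   # also leans "happy"
face   = [0.3, 0.2, 2.2, 0.4]   # leans "angry"

label, fused = late_fusion([speech, text, face])
```

Because two of the three modalities favor the same class, the fused prediction follows the majority, which is exactly the robustness-to-a-single-noisy-modality argument that motivates multimodal systems. Many of the papers below replace this simple averaging with learned, attention-based, or LLM-driven fusion.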

BeMERC: Behavior-Aware MLLM-based Framework for Multimodal Emotion Recognition in Conversation

Mar 31, 2025

A Human Digital Twin Architecture for Knowledge-based Interactions and Context-Aware Conversations

Apr 04, 2025

GatedxLSTM: A Multimodal Affective Computing Approach for Emotion Recognition in Conversations

Mar 26, 2025

OmniVox: Zero-Shot Emotion Recognition with Omni-LLMs

Mar 27, 2025

Handling Weak Complementary Relationships for Audio-Visual Emotion Recognition

Mar 15, 2025

Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party Conversation Contexts

Mar 09, 2025

R1-Omni: Explainable Omni-Multimodal Emotion Recognition with Reinforcement Learning

Mar 07, 2025

Latent Distribution Decoupling: A Probabilistic Framework for Uncertainty-Aware Multimodal Emotion Recognition

Feb 19, 2025

A Novel Approach for Multimodal Emotion Recognition: Multimodal Semantic Information Fusion

Feb 12, 2025

MSE-Adapter: A Lightweight Plugin Endowing LLMs with the Capability to Perform Multimodal Sentiment Analysis and Emotion Recognition

Feb 18, 2025