Zheng Lian

Emotion and Intent Joint Understanding in Multimodal Conversation: A Benchmarking Dataset

Jul 03, 2024

Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning

Jun 17, 2024

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition

Apr 29, 2024

Multimodal Fusion with Pre-Trained Model Features in Affective Behaviour Analysis In-the-wild

Mar 22, 2024

Can Deception Detection Go Deeper? Dataset, Evaluation, and Benchmark for Deception Reasoning

Feb 18, 2024

HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition

Jan 11, 2024

SVFAP: Self-supervised Video Facial Affect Perceiver

Dec 31, 2023

GPT-4V with Emotion: A Zero-shot Benchmark for Multimodal Emotion Understanding

Dec 07, 2023

MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition

Jul 05, 2023

MFAS: Emotion Recognition through Multiple Perspectives Fusion Architecture Search Emulating Human Cognition

Jun 12, 2023