Zaijing Li

UniSA: Unified Generative Framework for Sentiment Analysis

Sep 04, 2023
Zaijing Li, Ting-En Lin, Yuchuan Wu, Meng Liu, Fengxiao Tang, Ming Zhao, Yongbin Li

Sentiment analysis is a crucial task that aims to understand people's emotional states and predict emotional categories based on multimodal information. It consists of several subtasks, such as emotion recognition in conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal sentiment analysis (MSA). However, unifying all subtasks in sentiment analysis presents numerous challenges, including modality alignment, unified input/output forms, and dataset bias. To address these challenges, we propose a Task-Specific Prompt method to jointly model subtasks and introduce a multimodal generative framework called UniSA. Additionally, we organize the benchmark datasets of the main subtasks into a new Sentiment Analysis Evaluation benchmark, SAEval. We design novel pre-training tasks and training methods that enable the model to learn generic sentiment knowledge across subtasks, improving its multimodal sentiment perception ability. Our experimental results show that UniSA performs comparably to the state-of-the-art on all subtasks and generalizes well to various subtasks in sentiment analysis.

* Accepted to ACM MM 2023 
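
The abstract describes unifying ERC, ABSA, and MSA in one generative model via task-specific prompts. Below is a minimal, hypothetical sketch of that idea: each subtask's input is rendered as a prompted text-to-text example so a single generative model can be trained on all of them. The prompt wording, task tags, and field names are illustrative assumptions, not taken from the paper or its code.

```python
# Hypothetical sketch: format heterogeneous sentiment subtasks as
# text-to-text examples with task-specific prompts. All prompt strings
# and field names are assumptions for illustration only.

def build_example(task, **fields):
    """Return (source, target) strings for a generative sentiment model."""
    if task == "erc":  # emotion recognition in conversation
        history = " </s> ".join(fields["dialogue"])
        source = f"[ERC] classify the last speaker's emotion: {history}"
        target = fields["emotion"]
    elif task == "absa":  # aspect-based sentiment analysis
        source = (f"[ABSA] sentiment toward aspect '{fields['aspect']}' "
                  f"in: {fields['sentence']}")
        target = fields["polarity"]
    elif task == "msa":  # multimodal sentiment analysis (text part shown)
        source = f"[MSA] overall sentiment score of: {fields['transcript']}"
        target = str(fields["score"])
    else:
        raise ValueError(f"unknown task: {task}")
    return source, target

src, tgt = build_example(
    "absa",
    sentence="The battery life is great but the screen is dim.",
    aspect="battery life",
    polarity="positive",
)
print(src, "->", tgt)
```

With all subtasks in this shared source/target form, one encoder-decoder can in principle be trained on their union, which is the kind of unification the abstract refers to.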

EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition

Mar 25, 2022
Zaijing Li, Fengxiao Tang, Ming Zhao, Yusen Zhu

Emotion recognition in conversation (ERC) aims to analyze the speaker's state and identify their emotion in the conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. To extract multimodal information and the emotional tendency of each utterance effectively, we propose a new structure named Emoformer, which extracts multimodal emotion vectors from different modalities and fuses them with the sentence vector into an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains emotion classification results from a context analysis model. Experiments on two benchmark datasets show that our model outperforms the existing state-of-the-art models.

* 9 pages, 5 figures, accepted to Findings of ACL 2022 
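
The abstract's central construct is the "emotion capsule": per-modality emotion vectors fused with the sentence vector into one utterance representation. The sketch below illustrates that fusion step only; the dimensions, the simple linear extractors standing in for the Emoformer, and concatenation as the fusion choice are all assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the emotion-capsule idea:
# per-modality emotion vectors are extracted and concatenated with the
# sentence vector to form one utterance representation.
import torch
import torch.nn as nn

class EmotionCapsule(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, visual_dim=64, emo_dim=32):
        super().__init__()
        # one small "emotion extractor" per modality (stand-in for Emoformer)
        self.text_emo = nn.Linear(text_dim, emo_dim)
        self.audio_emo = nn.Linear(audio_dim, emo_dim)
        self.visual_emo = nn.Linear(visual_dim, emo_dim)

    def forward(self, text_vec, audio_vec, visual_vec):
        # emotion vectors from each modality
        emos = [self.text_emo(text_vec),
                self.audio_emo(audio_vec),
                self.visual_emo(visual_vec)]
        # fuse the emotion vectors with the sentence (text) vector
        return torch.cat(emos + [text_vec], dim=-1)

caps = EmotionCapsule()
capsule = caps(torch.randn(2, 768), torch.randn(2, 128), torch.randn(2, 64))
print(capsule.shape)  # (2, 32*3 + 768)
```

In the paper's pipeline, a sequence of such capsules would then feed a context analysis model that produces the per-utterance emotion labels.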

SEOVER: Sentence-level Emotion Orientation Vector based Conversation Emotion Recognition Model

Jun 16, 2021
Zaijing Li, Fengxiao Tang, Tieyu Sun, Yusen Zhu, Ming Zhao

For the task of conversation emotion recognition, recent works focus on speaker relationship modeling but ignore the role of the utterance's emotional tendency. In this paper, we propose a new representation, the sentence-level emotion orientation vector, to model the potential correlation of emotions between sentences. Based on it, we design an emotion recognition model that extracts sentence-level emotion orientation vectors from a language model and learns jointly from the dialogue sentiment analysis model and the extracted vectors to identify the speaker's emotional orientation during the conversation. We conduct experiments on two benchmark datasets and compare our model with five baseline models. The experimental results show that our model achieves better performance on both datasets.
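
The joint learning described above combines an utterance-level emotion orientation signal with a dialogue-context representation before the final prediction. The hedged sketch below shows one plausible way to wire that up; the vector sizes, the use of class-sized orientation vectors, and fusion by concatenation are illustrative assumptions rather than the paper's architecture.

```python
# Hedged sketch of the joint idea: a sentence-level emotion orientation
# vector (e.g., class-sized scores from a language model) is combined with
# the dialogue-context model's representation before classifying emotion.
import torch
import torch.nn as nn

class JointEmotionClassifier(nn.Module):
    def __init__(self, context_dim=256, num_emotions=6):
        super().__init__()
        self.classifier = nn.Linear(context_dim + num_emotions, num_emotions)

    def forward(self, context_repr, orientation_vec):
        # context_repr: from the dialogue sentiment-analysis model
        # orientation_vec: sentence-level emotion orientation vector
        fused = torch.cat([context_repr, orientation_vec], dim=-1)
        return self.classifier(fused)

model = JointEmotionClassifier()
logits = model(torch.randn(4, 256), torch.randn(4, 6))
print(logits.shape)  # (4, 6)
```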
