
"Sentiment": models, code, and papers

SentiBERT: A Transferable Transformer-Based Architecture for Compositional Sentiment Semantics

May 08, 2020
Da Yin, Tao Meng, Kai-Wei Chang

We propose SentiBERT, a variant of BERT that effectively captures compositional sentiment semantics. The model combines contextualized representations with a binary constituency parse tree to capture semantic composition. Comprehensive experiments demonstrate that SentiBERT achieves competitive performance on phrase-level sentiment classification. We further demonstrate that the sentiment composition learned from the phrase-level annotations on SST can be transferred to other sentiment analysis tasks as well as related tasks, such as emotion classification. Moreover, we conduct ablation studies and design visualization methods to understand SentiBERT. We show that SentiBERT is better than baseline approaches at capturing negation and contrastive relations and at modeling compositional sentiment semantics.
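
The bottom-up composition over a binary constituency tree described here can be illustrated with a minimal sketch. This is not the authors' implementation: SentiBERT learns its composition with attention over BERT representations, whereas the toy `compose` rule and lexicon below are invented stand-ins.

```python
# Toy bottom-up sentiment composition over a binary constituency tree.
# A real model (e.g. SentiBERT) learns the composition function; here
# averaging and a hand-written negation rule stand in for it.
NEGATORS = {"not", "never"}

def tree_sentiment(node, lexicon):
    """node: token string or (left, right) tuple; returns score in [-1, 1]."""
    if isinstance(node, str):
        return lexicon.get(node, 0.0)      # leaf: look up token polarity
    left, right = node
    if isinstance(left, str) and left in NEGATORS:
        return -tree_sentiment(right, lexicon)  # negation flips polarity
    l = tree_sentiment(left, lexicon)
    r = tree_sentiment(right, lexicon)
    return (l + r) / 2                      # toy composition: average children

lexicon = {"good": 1.0, "bad": -1.0, "movie": 0.0}
tree = ("not", (("a", "bad"), "movie"))     # "not a bad movie"
print(tree_sentiment(tree, lexicon))        # → 0.25
```

The negation branch is the point of the sketch: a flat bag-of-words score for "not a bad movie" would come out negative, while tree-structured composition recovers the positive reading.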

* ACL-2020 


Development of a General Purpose Sentiment Lexicon for Igbo Language

Apr 24, 2020
Emeka Ogbuju, Moses Onyesolu

There are publicly available general-purpose sentiment lexicons for some high-resource languages, but very few exist for low-resource languages. This makes it difficult to directly perform sentiment analysis tasks in such languages. The objective of this work is to create a general-purpose sentiment lexicon for the Igbo language that can determine the sentiment of documents written in Igbo without having to translate them to English. The materials used were an automatically translated version of Liu's lexicon and manually added native Igbo words. The result of this work is a general-purpose lexicon called IgboSentilex. The performance was tested on the BBC Igbo news channel, where it returned an average polarity agreement of 95.75 percent with other general-purpose sentiment lexicons.
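
A general-purpose sentiment lexicon of this kind is typically applied by summing per-token polarities. The sketch below shows that pattern with invented placeholder entries, not actual IgboSentilex data:

```python
# Toy lexicon-based polarity scorer. The entries are illustrative
# placeholders, not actual IgboSentilex contents.
TOY_LEXICON = {"oma": 1, "ajo": -1}   # hypothetical positive/negative words

def document_polarity(tokens, lexicon):
    """Sum per-token polarities; the sign of the total gives the label."""
    score = sum(lexicon.get(t.lower(), 0) for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(document_polarity(["nke", "oma"], TOY_LEXICON))  # → positive
```

Because the lookup needs no parsing or training, this approach works directly on text in the target language, which is the motivation for building the lexicon rather than translating documents to English.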

* Accepted and presented at the Widening Natural Language Processing (WiNLP) workshop, co-located with the Association for Computational Linguistics (ACL) conference 2019 in Florence, Italy. See https://www.winlp.org/wp-content/uploads/2019/final_papers/103_Paper.pdf 


Simple Text Mining for Sentiment Analysis of Political Figure Using Naive Bayes Classifier Method

Aug 21, 2015
Yustinus Eko Soelistio, Martinus Raditia Sigit Surendra

Text mining can be applied to many fields. One of its applications is political sentiment analysis of digital newspapers. In this paper, sentiment analysis is applied to digital news articles to extract their positive or negative sentiment regarding a particular politician. This paper suggests a simple model for analyzing the sentiment polarity of digital newspapers using the naive Bayes classifier method. The model starts with a set of initial data, which is updated when new information appears. The model showed promising results when tested and can be applied to other sentiment analysis problems.
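
A multinomial naive Bayes classifier of the kind the paper uses can be written in a few lines of standard-library Python. The training sentences below are invented for illustration, not the paper's newspaper data:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Collects priors and word counts;
    add-one (Laplace) smoothing is applied at prediction time."""
    label_counts = Counter(lbl for _, lbl in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, lbl in docs:
        word_counts[lbl].update(tokens)
        vocab.update(tokens)
    return label_counts, word_counts, vocab

def predict_nb(tokens, label_counts, word_counts, vocab):
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for lbl, n in label_counts.items():
        lp = math.log(n / total)                 # log prior
        denom = sum(word_counts[lbl].values()) + len(vocab)
        for t in tokens:
            # smoothed log likelihood of each token under this class
            lp += math.log((word_counts[lbl][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

docs = [(["good", "policy"], "pos"), (["great", "speech"], "pos"),
        (["bad", "scandal"], "neg"), (["corrupt", "deal"], "neg")]
model = train_nb(docs)
print(predict_nb(["good", "speech"], *model))  # → pos
```

Updating the model when new labeled articles appear, as the paper describes, amounts to adding their tokens to `word_counts` and their labels to `label_counts`.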

* 5 pages, published in the Proceedings of the 7th ICTS 


Sentiment Analysis of Document Based on Annotation

Nov 07, 2011
Archana Shukla

I present a tool that assesses the quality or usefulness of a document based on its annotations. Annotations may include comments, notes, observations, highlights, underlines, explanations, questions, or requests for help. Comments are used for evaluative purposes, while the others are used for summarization or expansion. An annotation may also be attached to another annotation; such annotations are referred to as meta-annotations. Not all annotations carry equal weight. My tool considers highlights and underlines as well as comments to infer the collective sentiment of annotators, classified as positive, negative, or objective. The tool computes the collective sentiment of annotations in two ways: it counts all annotations present on the document, and it computes sentiment scores of all annotations, including comments, to obtain the collective sentiment about the document and judge its quality. I demonstrate the use of the tool on research papers.
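
The two aggregation modes described (counting annotations, and weighting their sentiment scores) might look like the following sketch. The annotation types, weights, and scores here are invented for illustration, not taken from the paper:

```python
# Toy aggregation of annotation sentiment. The per-type weights are
# hypothetical; the paper weights highlights, underlines and comments.
WEIGHTS = {"comment": 1.0, "highlight": 0.5, "underline": 0.5}

def collective_sentiment(annotations):
    """annotations: list of (type, score) with score in [-1, 1].
    Returns (count per type, weighted overall label)."""
    counts = {}
    total, weight_sum = 0.0, 0.0
    for kind, score in annotations:
        counts[kind] = counts.get(kind, 0) + 1   # mode 1: simple counting
        w = WEIGHTS.get(kind, 0.0)
        total += w * score                        # mode 2: weighted scores
        weight_sum += w
    mean = total / weight_sum if weight_sum else 0.0
    label = "positive" if mean > 0 else "negative" if mean < 0 else "objective"
    return counts, label

anns = [("comment", 0.8), ("highlight", 0.2), ("comment", -0.1)]
print(collective_sentiment(anns))
```

Giving comments a higher weight than highlights or underlines reflects the paper's observation that comments are evaluative while the other annotation types mainly mark emphasis.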

* 14 pages, 14 figures, published in IJWEST Journal 


Visual Sentiment Prediction with Deep Convolutional Neural Networks

Nov 21, 2014
Can Xu, Suleyman Cetintas, Kuang-Chih Lee, Li-Jia Li

Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although a vast amount of research is devoted to sentiment analysis of textual data, there has been very limited work focusing on analyzing the sentiment of image data. In this work, we propose a novel visual sentiment prediction framework that performs image understanding with Deep Convolutional Neural Networks (CNN). Specifically, the proposed sentiment prediction framework performs transfer learning from a CNN with millions of parameters that is pre-trained on large-scale data for object recognition. Experiments conducted on two real-world datasets from Twitter and Tumblr demonstrate the effectiveness of the proposed visual sentiment analysis framework.



A Novel Context-Aware Multimodal Framework for Persian Sentiment Analysis

Mar 03, 2021
Kia Dashtipour, Mandar Gogate, Erik Cambria, Amir Hussain

Most recent works on sentiment analysis have exploited the text modality. However, millions of hours of video recordings posted on social media platforms every day hold vital unstructured information that can be exploited to more effectively gauge public perception. Multimodal sentiment analysis offers an innovative solution to computationally understand and harvest sentiments from videos by contextually exploiting audio, visual and textual cues. In this paper, we firstly present a first-of-its-kind Persian multimodal dataset comprising more than 800 utterances, as a benchmark resource for researchers to evaluate multimodal sentiment analysis approaches in the Persian language. Secondly, we present a novel context-aware multimodal sentiment analysis framework that simultaneously exploits acoustic, visual and textual cues to more accurately determine the expressed sentiment. We employ both decision-level (late) and feature-level (early) fusion methods to integrate affective cross-modal information. Experimental results demonstrate that the contextual integration of textual, acoustic and visual features delivers better performance (91.39%) compared to unimodal features (89.24%).
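
Decision-level (late) fusion, one of the two fusion strategies mentioned, can be sketched as a weighted average of per-modality class probabilities. The weights and probabilities below are illustrative, not values from the paper:

```python
# Decision-level (late) fusion: each modality produces its own class
# probabilities, which are combined by a weighted average. The weights
# below are illustrative, not from the paper.
def late_fusion(modality_probs, weights):
    """modality_probs: dict modality -> [p_neg, p_neu, p_pos]."""
    n_classes = len(next(iter(modality_probs.values())))
    fused = [0.0] * n_classes
    total_w = sum(weights[m] for m in modality_probs)
    for m, probs in modality_probs.items():
        for i, p in enumerate(probs):
            fused[i] += weights[m] * p / total_w
    return fused

probs = {"text":  [0.1, 0.2, 0.7],
         "audio": [0.3, 0.4, 0.3],
         "video": [0.2, 0.3, 0.5]}
weights = {"text": 0.5, "audio": 0.25, "video": 0.25}
fused = late_fusion(probs, weights)
print(max(range(3), key=lambda i: fused[i]))  # index of the predicted class
```

Feature-level (early) fusion would instead concatenate the modality features before a single classifier; late fusion keeps each modality's classifier independent, which makes it robust when one modality is missing or noisy.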

* Accepted in Neurocomputing 


Towards Target-dependent Sentiment Classification in News Articles

May 20, 2021
Felix Hamborg, Karsten Donnay, Bela Gipp

Extensive research on target-dependent sentiment classification (TSC) has led to strong classification performance in domains where authors tend to explicitly express sentiment about specific entities or topics, such as in reviews or on social media. We investigate TSC in news articles, a much less researched domain, despite the importance of news as an essential information source in individual and societal decision making. This article introduces NewsTSC, a manually annotated dataset for exploring TSC on news articles. Investigating characteristics of sentiment in news and contrasting them with popular TSC domains, we find that sentiment in the news is expressed less explicitly, is more dependent on context and readership, and requires a greater degree of interpretation. In an extensive evaluation, we find that the state of the art in TSC performs worse on news articles than on other domains (average recall AvgRec = 69.8 on NewsTSC compared to AvgRec = [75.6, 82.2] on established TSC datasets). Reasons include incorrectly resolved relations between targets and sentiment-bearing phrases, and dependence on context beyond the text. As a major improvement over previous news TSC, we find that BERT's natural language understanding capabilities capture the less explicit sentiment used in news articles.



MSCTD: A Multimodal Sentiment Chat Translation Dataset

Feb 28, 2022
Yunlong Liang, Fandong Meng, Jinan Xu, Yufeng Chen, Jie Zhou

Multimodal machine translation and textual chat translation have received considerable attention in recent years. Although conversation in its natural form is usually multimodal, there is still a lack of work on multimodal machine translation in conversations. In this work, we introduce a new task named Multimodal Chat Translation (MCT), aiming to generate more accurate translations with the help of the associated dialogue history and visual context. To this end, we first construct a Multimodal Sentiment Chat Translation Dataset (MSCTD) containing 142,871 English-Chinese utterance pairs in 14,762 bilingual dialogues and 30,370 English-German utterance pairs in 3,079 bilingual dialogues. Each utterance pair, corresponding to the visual context that reflects the current conversational scene, is annotated with a sentiment label. Then, we benchmark the task by establishing multiple baseline systems that incorporate multimodal and sentiment features for MCT. Preliminary experiments on four language directions (English-Chinese and English-German) verify the potential of contextual and multimodal information fusion and the positive impact of sentiment on the MCT task. Additionally, as a by-product, the MSCTD also provides two new benchmarks for multimodal dialogue sentiment analysis. Our work can facilitate research on both multimodal chat translation and multimodal dialogue sentiment analysis.

* Accepted at ACL 2022 as a long paper of main conference. Code and data: https://github.com/XL2248/MSCTD 


An AutoML-based Approach to Multimodal Image Sentiment Analysis

Feb 16, 2021
Vasco Lopes, António Gaspar, Luís A. Alexandre, João Cordeiro

Sentiment analysis is a research topic focused on analysing data to extract information related to the sentiment that it causes. Applications of sentiment analysis are wide, ranging from recommendation systems and marketing to customer satisfaction. Recent approaches evaluate textual content using Machine Learning techniques that are trained over large corpora. However, as social media has grown, other data types have emerged in large quantities, such as images. Sentiment analysis of images has been shown to be a valuable complement to textual data, since it enables the inference of the underlying message polarity by creating context and connections. Multimodal sentiment analysis approaches intend to leverage the information of both textual and image content to perform an evaluation. Despite recent advances, current solutions still flounder at combining image and textual information to classify social media data, mainly due to subjectivity, inter-class homogeneity, and differences between the fused data. In this paper, we propose a method that combines individual textual and image sentiment analyses into a final fused classification based on AutoML, which performs a random search to find the best model. Our method achieved state-of-the-art performance on the B-T4SA dataset, with 95.19% accuracy.
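
The AutoML step described here amounts to a random search over model configurations. A standard-library-only sketch of that loop, in which the search space and the scoring function are invented placeholders (a real run would train and validate a fusion classifier per configuration):

```python
import random

# Random search over a hypothetical configuration space, standing in
# for the AutoML step that picks the best fused classifier.
SEARCH_SPACE = {
    "fusion":       ["concat", "average", "max"],
    "hidden_units": [32, 64, 128],
    "dropout":      [0.0, 0.25, 0.5],
}

def evaluate(config):
    """Placeholder for validation accuracy of a model trained with
    `config`; deterministic toy score derived from the config itself."""
    return random.Random(str(sorted(config.items()))).random()

def random_search(space, n_trials=20, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}  # sample a config
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

cfg, score = random_search(SEARCH_SPACE)
print(cfg, round(score, 3))
```

Random search is a common AutoML baseline because it is trivially parallel and, unlike grid search, does not waste trials on unimportant hyperparameter dimensions.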



Hierarchical Attention Generative Adversarial Networks for Cross-domain Sentiment Classification

Mar 27, 2019
Yuebing Zhang, Duoqian Miao, Jiaqi Wang

Cross-domain sentiment classification (CDSC) is an important task in domain adaptation and sentiment classification. Due to the domain discrepancy, a sentiment classifier trained on source-domain data may not work well on target-domain data. In recent years, many researchers have used deep neural network models for the cross-domain sentiment classification task, many of which use a Gradient Reversal Layer (GRL) to design an adversarial network structure that trains a domain-shared sentiment classifier. Different from those methods, we propose Hierarchical Attention Generative Adversarial Networks (HAGAN), which alternately train a generator and a discriminator in order to produce a document representation that is sentiment-distinguishable but domain-indistinguishable. Besides, the HAGAN model applies a Bidirectional Gated Recurrent Unit (Bi-GRU) to encode the contextual information of words and sentences into the document representation. In addition, the HAGAN model uses a hierarchical attention mechanism to optimize the document representation and automatically capture the pivots and non-pivots. Experiments on the Amazon review dataset show the effectiveness of HAGAN.
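
The Gradient Reversal Layer (GRL) mentioned above is the identity in the forward pass and negates (and scales) the gradient in the backward pass, so the feature extractor learns to confuse the domain discriminator. A minimal framework-free sketch of just that behavior:

```python
# Minimal gradient reversal layer (GRL): identity forward, negated
# (and scaled) gradient backward. No autograd framework is used here;
# in practice this would be a custom autograd op.
class GradReverse:
    def __init__(self, lambd=1.0):
        self.lambd = lambd  # reversal strength, often annealed during training

    def forward(self, x):
        return x  # identity: features pass through unchanged

    def backward(self, grad_output):
        # Flip the gradient so the upstream feature extractor is pushed
        # to make the domain discriminator's job harder.
        return [-self.lambd * g for g in grad_output]

grl = GradReverse(lambd=0.5)
print(grl.forward([1.0, -2.0]))    # → [1.0, -2.0]
print(grl.backward([0.4, -0.2]))   # → [-0.2, 0.1]
```

HAGAN replaces this single-layer trick with an explicit generator/discriminator alternation, but both pursue the same goal: a representation that is informative for sentiment yet uninformative about the source domain.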


