"Sentiment Analysis": models, code, and papers

Benchmarking Multimodal Sentiment Analysis

Jul 29, 2017
Erik Cambria, Devamanyu Hazarika, Soujanya Poria, Amir Hussain, R. B. V. Subramaanyam

We propose a framework for multimodal sentiment analysis and emotion recognition using convolutional neural network-based feature extraction from text and visual modalities. We obtain a performance improvement of 10% over the state of the art by combining visual, text and audio features. We also discuss some major issues frequently ignored in multimodal sentiment analysis research: the role of speaker-independent models, the importance of the modalities, and generalizability. The paper thus serves as a new benchmark for further research in multimodal sentiment analysis and also demonstrates the different facets of analysis to be considered when performing such tasks.
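
The abstract describes combining visual, text and audio features; a minimal sketch of such feature-level (early) fusion, assuming pre-extracted per-utterance feature vectors (feature dimensions are illustrative, not the paper's values), might look like this:

```python
import torch
import torch.nn as nn

class EarlyFusionClassifier(nn.Module):
    """Concatenate per-modality feature vectors and classify sentiment."""
    def __init__(self, text_dim=300, audio_dim=74, visual_dim=35, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + audio_dim + visual_dim, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )

    def forward(self, text_feat, audio_feat, visual_feat):
        # Early fusion: concatenate modality features before classification.
        fused = torch.cat([text_feat, audio_feat, visual_feat], dim=-1)
        return self.net(fused)
```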

* Accepted in CICLing 2017 
  

Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis

May 14, 2019
Md Shad Akhtar, Dushyant Singh Chauhan, Deepanway Ghosal, Soujanya Poria, Asif Ekbal, Pushpak Bhattacharyya

Related tasks often have inter-dependence on each other and perform better when solved in a joint framework. In this paper, we present a deep multi-task learning framework that jointly performs both sentiment and emotion analysis. The multi-modal inputs (i.e., text, acoustic and visual frames) of a video convey diverse and distinctive information, and usually do not contribute equally to the decision making. We propose a context-level inter-modal attention framework for simultaneously predicting the sentiment and expressed emotions of an utterance. We evaluate our proposed approach on the CMU-MOSEI dataset for multi-modal sentiment and emotion analysis. Evaluation results suggest that the multi-task learning framework offers an improvement over the single-task framework. The proposed approach reports new state-of-the-art performance for both sentiment analysis and emotion analysis.
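
A minimal sketch of the multi-task idea, with a shared utterance representation feeding separate sentiment and emotion heads (the layer sizes, the six-emotion label set, and the joint loss weighting are assumptions; the paper's context-level inter-modal attention is omitted):

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Shared representation with sentiment and emotion task heads."""
    def __init__(self, in_dim=256, n_emotions=6):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.sentiment_head = nn.Linear(128, 1)         # sentiment intensity (regression)
        self.emotion_head = nn.Linear(128, n_emotions)  # multi-label emotion logits

    def forward(self, x):
        h = self.shared(x)
        return self.sentiment_head(h), self.emotion_head(h)

# Joint training would combine the two task losses, e.g. (weights assumed):
# loss = mse(sent_pred, sent_y) + bce_with_logits(emo_pred, emo_y)
```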

* Accepted for publication in NAACL:HLT-2019 
  

Legal Sentiment Analysis and Opinion Mining (LSAOM): Assimilating Advances in Autonomous AI Legal Reasoning

Oct 02, 2020
Lance Eliot

An expanding field of substantive interest for the theory of law and the practice of law entails Legal Sentiment Analysis and Opinion Mining (LSAOM), consisting of two often intertwined phenomena and actions underlying legal discussions and narratives: (1) Sentiment Analysis (SA) for the detection of expressed or implied sentiment about a legal matter within the context of a legal milieu, and (2) Opinion Mining (OM) for the identification and illumination of explicit or implicit opinion accompaniments immersed within legal discourse. Efforts to undertake LSAOM have historically been performed by human hand and cognition, and only thinly aided in more recent times by computer-based approaches. Advances in Artificial Intelligence (AI), especially in Natural Language Processing (NLP) and Machine Learning (ML), are increasingly bolstering how automation can systematically perform either or both of Sentiment Analysis and Opinion Mining, and these advances are inexorably being carried over into the legal context to improve LSAOM capabilities. This research paper examines the evolving infusion of AI into Legal Sentiment Analysis and Opinion Mining, proposes an alignment with the Levels of Autonomy (LoA) of AI Legal Reasoning (AILR), and provides additional insights regarding AI LSAOM in its mechanizations and its potential impact on the study and practice of law.

* 26 pages, 8 figures. arXiv admin note: text overlap with arXiv:2009.14620 
  

A new ANEW: Evaluation of a word list for sentiment analysis in microblogs

Mar 15, 2011
Finn Årup Nielsen

Sentiment analysis of microblogs such as Twitter has recently gained a fair amount of attention. One of the simplest sentiment analysis approaches compares the words of a posting against a labeled word list, where each word has been scored for valence -- a 'sentiment lexicon' or 'affective word list'. Several affective word lists exist, e.g., ANEW (Affective Norms for English Words), developed before the advent of microblogging and sentiment analysis. I wanted to examine how well ANEW and other word lists perform for the detection of sentiment strength in microblog posts in comparison with a new word list specifically constructed for microblogs. I used manually labeled postings from Twitter scored for sentiment. Using simple word matching, I show that the new word list may perform better than ANEW, though not as well as the more elaborate approach found in SentiStrength.
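
A minimal sketch of the lexicon word-matching approach the abstract describes, with a few made-up valence scores standing in for the thousands of entries in real lists such as ANEW or AFINN:

```python
# Tiny illustrative valence lexicon; real affective word lists contain
# thousands of scored words.
LEXICON = {"good": 3, "happy": 3, "love": 3, "bad": -3, "hate": -4, "terrible": -5}

def lexicon_score(post: str) -> int:
    """Sum the valence of every known word in the post (simple word matching)."""
    tokens = post.lower().split()
    return sum(LEXICON.get(tok, 0) for tok in tokens)

print(lexicon_score("I love this phone but the battery is bad"))  # 3 + (-3) = 0
```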

* Proceedings of the ESWC2011 Workshop on 'Making Sense of Microposts': Big things come in small packages (2011) 93-98 
* 6 pages, 4 figures, 1 table, Submitted to "Making Sense of Microposts (#MSM2011)" 
  

Sentiment Analysis on Customer Responses

Jul 05, 2020
Antony Samuels, John Mcgonical

Sentiment analysis is one of the fastest-spreading research areas in computer science, making it challenging to keep track of all the activity in the area. We present a study of customer feedback reviews on products, in which we use opinion mining, text mining and sentiment analysis to examine how reviews shape opinions about a specific product. The data used in this study are online product reviews collected from Amazon.com. We performed a comparative sentiment analysis of the retrieved reviews. This paper provides a sentiment analysis of various opinions on smartphones, classifying them as Positive, Negative or Neutral.
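
The abstract does not say which classifier is used; a minimal sketch of one common way to bucket review text into Positive/Negative/Neutral, using NLTK's VADER lexicon scorer (an assumption for illustration, not the paper's method):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def label_review(text: str) -> str:
    """Map VADER's compound score to the usual three buckets."""
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "Positive"
    if score <= -0.05:
        return "Negative"
    return "Neutral"

print(label_review("The camera is great but the battery dies quickly."))
```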

  

A Sentiment Analysis Dataset for Code-Mixed Malayalam-English

May 30, 2020
Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, John P. McCrae

There is an increasing demand for sentiment analysis of text from social media, which is mostly code-mixed. Systems trained on monolingual data fail on code-mixed data due to the complexity of mixing at different levels of the text. However, very few resources are available for creating models specific to code-mixed data. Although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still perform better. Only a few datasets for popular language pairs such as English-Spanish, English-Hindi, and English-Chinese are available, and there are no resources for Malayalam-English code-mixed data. This paper presents a new gold-standard corpus for sentiment analysis of code-mixed Malayalam-English text, annotated by voluntary annotators. The corpus achieved a Krippendorff's alpha above 0.8. We use this new corpus to provide a benchmark for sentiment analysis of Malayalam-English code-mixed text.
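
For reference, a minimal sketch of how Krippendorff's alpha can be computed for nominal sentiment labels (a generic implementation for illustration, not the one used to score this corpus):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    `units` is one list of annotator labels per item, e.g.
    [["pos", "pos", "neg"], ["neu", "neu"], ...]; items with fewer
    than two labels are skipped.
    """
    coincidence = Counter()                      # o_ck coincidence matrix
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        for c, k in permutations(labels, 2):     # all ordered pairs within the item
            coincidence[(c, k)] += 1.0 / (m - 1)

    totals = Counter()                           # n_c marginals per category
    for (c, _), v in coincidence.items():
        totals[c] += v
    n = sum(totals.values())

    d_obs = sum(v for (c, k), v in coincidence.items() if c != k)
    d_exp = sum(totals[c] * totals[k] for c in totals for k in totals if c != k) / (n - 1)
    return 1.0 - d_obs / d_exp

print(krippendorff_alpha_nominal([["pos", "pos"], ["neg", "neg"], ["neu", "pos"]]))
```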

  

Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis

Nov 21, 2019
Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, Luo Si

Target-based sentiment analysis or aspect-based sentiment analysis (ABSA) refers to addressing various sentiment analysis tasks at a fine-grained level, which includes but is not limited to aspect extraction, aspect sentiment classification, and opinion extraction. There exist many solvers of the above individual subtasks or a combination of two subtasks, and they can work together to tell a complete story, i.e. the discussed aspect, the sentiment on it, and the cause of the sentiment. However, no previous ABSA research tried to provide a complete solution in one shot. In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE). Particularly, a solver of this task needs to extract triplets (What, How, Why) from the inputs, which show WHAT the targeted aspects are, HOW their sentiment polarities are and WHY they have such polarities (i.e. opinion reasons). For instance, one triplet from "Waiters are very friendly and the pasta is simply average" could be ('Waiters', positive, 'friendly'). We propose a two-stage framework to address this task. The first stage predicts what, how and why in a unified model, and then the second stage pairs up the predicted what (how) and why from the first stage to output triplets. In the experiments, our framework has set a benchmark performance in this novel triplet extraction task. Meanwhile, it outperforms a few strong baselines adapted from state-of-the-art related methods.
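
To make the target output concrete, a sketch of the (aspect, polarity, opinion) triplet structure for the abstract's running example; only the quoted 'Waiters' triplet is taken from the paper, and the two-stage pairing itself is only hinted at in comments:

```python
from typing import NamedTuple

class Triplet(NamedTuple):
    aspect: str      # WHAT the targeted aspect is
    polarity: str    # HOW its sentiment polarity is
    opinion: str     # WHY it has that polarity (the opinion term)

sentence = "Waiters are very friendly and the pasta is simply average"

# Stage one would predict aspect spans, polarities and opinion spans in a
# unified model; stage two pairs the predicted aspects with opinion terms.
triplets = [Triplet(aspect="Waiters", polarity="positive", opinion="friendly")]
print(triplets)
```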

* This paper is accepted in AAAI 2020 
  

Multimodal Sentiment Analysis: Addressing Key Issues and Setting up Baselines

Mar 19, 2018
Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Erik Cambria, Amir Hussain, Alexander Gelbukh

Sentiment analysis has proven to be a very useful tool in many applications regarding social media, which has led to a great surge of research in this field. Hence, in this paper, we compile baselines for such research. We explore three different deep-learning-based architectures for multimodal sentiment classification, each improving upon the previous one, and evaluate these architectures on multiple datasets with a fixed train/test partition. We also discuss some major issues frequently ignored in multimodal sentiment analysis research, e.g., the role of speaker-exclusive models, the importance of different modalities, and generalizability. This framework illustrates the different facets of analysis to be considered while performing multimodal sentiment analysis and, hence, serves as a new benchmark for future research in this emerging field. We draw a comparison among the methods using empirical data obtained from the experiments. In the future, we plan to focus on extracting semantics from visual features, cross-modal features and fusion.
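
One of the frequently ignored issues mentioned above is evaluating with speaker-exclusive (speaker-independent) partitions; a minimal sketch of enforcing that no speaker appears in both train and test, using scikit-learn's GroupShuffleSplit (the dataset arrays are placeholders, not the paper's data):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Placeholder arrays: per-utterance features, labels, and speaker ids.
X = np.random.rand(100, 32)
y = np.random.randint(0, 2, size=100)
speakers = np.random.randint(0, 10, size=100)

# Grouping by speaker guarantees speaker-exclusive train/test partitions.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=speakers))

assert set(speakers[train_idx]).isdisjoint(speakers[test_idx])
```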

* Cognitive Computation. arXiv admin note: substantial text overlap with arXiv:1707.09538 
  

Real-Time Prediction of BITCOIN Price using Machine Learning Techniques and Public Sentiment Analysis

Jun 18, 2020
S M Raju, Ali Mohammad Tarif

Bitcoin is the first digital decentralized cryptocurrency and has shown a significant increase in market capitalization in recent years. The objective of this paper is to determine the predictable price direction of Bitcoin in USD using machine learning techniques and sentiment analysis. Twitter and Reddit have attracted a great deal of attention from researchers studying public sentiment. We apply sentiment analysis and supervised machine learning to tweets extracted from Twitter and to Reddit posts, and analyze the correlation between bitcoin price movements and the sentiment expressed in tweets. We explore several supervised machine learning algorithms to develop a prediction model and provide an informative analysis of future market prices. Because of the difficulty of evaluating the exact nature of a time series (ARIMA) model, it is often very difficult to produce appropriate forecasts. We therefore also implement recurrent neural networks (RNN) with long short-term memory (LSTM) cells. We analyze the time-series prediction of bitcoin prices with greater efficiency using LSTM and compare the predictability of bitcoin price and sentiment analysis of bitcoin tweets against the standard ARIMA method. The RMSE (root-mean-square error) of LSTM is 198.448 (single feature) and 197.515 (multi-feature), whereas the ARIMA model's RMSE is 209.263, showing that the multi-feature LSTM gives the most accurate result.
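
A minimal sketch of the multi-feature LSTM idea, where each time step carries a [price, tweet sentiment] pair; the window size, hidden size, and feature choice are assumptions for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """LSTM regressor over a window of [price, sentiment] features."""
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)   # next-step price estimate

model = PriceLSTM()
window = torch.rand(8, 30, 2)                      # 8 samples, 30-step windows
pred = model(window)
# RMSE against a target series: torch.sqrt(nn.functional.mse_loss(pred, target))
```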

* 14 pages, 8 figures, 2 tables 
  

Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors

Mar 01, 2022
Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao

Multimodal sentiment analysis has attracted increasing attention and many models have been proposed. However, the performance of the state-of-the-art models decreases sharply when they are deployed in the real world. We find that the main reason is that real-world applications can only access the text output by automatic speech recognition (ASR) models, which may contain errors because of limited model capacity. Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models. To address this problem, we propose the sentiment word aware multimodal refinement model (SWRM), which can dynamically refine the erroneous sentiment words by leveraging multimodal sentiment clues. Specifically, we first use a sentiment word position detection module to obtain the most probable position of the sentiment word in the text and then utilize a multimodal sentiment word refinement module to dynamically refine the sentiment word embeddings. The refined embeddings are taken as the textual inputs of the multimodal feature fusion module to predict the sentiment labels. We conduct extensive experiments on the real-world datasets MOSI-Speechbrain, MOSI-IBM, and MOSI-iFlytek, and the results demonstrate the effectiveness of our model, which surpasses the current state-of-the-art models on all three datasets. Furthermore, our approach can easily be adapted for other multimodal feature fusion models. Data and code are available at https://github.com/albertwy/SWRM.
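
A minimal sketch of the refinement idea: the embedding at a detected sentiment-word position is blended with an attention-weighted mixture over candidate sentiment-word embeddings, conditioned on a multimodal context vector. The dimensions, candidate set, and fixed blending weight are assumptions; see the authors' repository above for the real implementation:

```python
import torch
import torch.nn as nn

class SentimentWordRefiner(nn.Module):
    """Refine the embedding at one text position using multimodal clues."""
    def __init__(self, emb_dim=300, ctx_dim=128, n_candidates=50):
        super().__init__()
        # Candidate sentiment-word embeddings (random here; would come from a lexicon).
        self.candidates = nn.Parameter(torch.randn(n_candidates, emb_dim))
        self.query = nn.Linear(ctx_dim, emb_dim)

    def forward(self, text_emb, position, multimodal_ctx):
        # text_emb: (seq_len, emb_dim); multimodal_ctx: (ctx_dim,)
        q = self.query(multimodal_ctx)                        # project context to a query
        attn = torch.softmax(self.candidates @ q, dim=0)      # weight each candidate word
        refined = attn @ self.candidates                      # mixture embedding
        out = text_emb.clone()
        out[position] = 0.5 * text_emb[position] + 0.5 * refined  # blend (gate is an assumption)
        return out
```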

* Findings of ACL 2022 
  