"Sentiment": models, code, and papers

Advancing Humor-Focused Sentiment Analysis through Improved Contextualized Embeddings and Model Architecture

Nov 23, 2020
Felipe Godoy

Humor is a natural and fundamental component of human interactions. When correctly applied, humor allows us to express thoughts and feelings conveniently and effectively, increasing interpersonal affection, likeability, and trust. However, understanding the use of humor is a computationally challenging task from the perspective of humor-aware language processing models. As language models become ubiquitous through virtual assistants and IoT devices, the need to develop humor-aware models rises rapidly. To further improve the state-of-the-art capacity to perform this particular sentiment-analysis task, we must explore models that incorporate contextualized and nonverbal elements in their design. Ideally, we seek architectures that accept nonverbal elements as additional embedded inputs to the model, alongside the original sentence-embedded input. This survey thus analyses the current state of research in techniques for improved contextualized embeddings incorporating nonverbal information, as well as newly proposed deep architectures to improve context retention on top of popular word-embedding methods.
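
A minimal sketch of the kind of architecture the survey points toward: a classifier that accepts an embedded nonverbal feature vector alongside the sentence embedding. This is illustrative only and not taken from any surveyed paper; the class name, dimensions, and fusion-by-concatenation choice are assumptions (PyTorch).

    # Illustrative sketch only: fuse a sentence embedding with embedded nonverbal
    # cues (e.g., prosody or laughter markers) before classification.
    import torch
    import torch.nn as nn

    class MultimodalHumorClassifier(nn.Module):
        def __init__(self, text_dim=768, nonverbal_dim=32, hidden_dim=256):
            super().__init__()
            self.nonverbal_proj = nn.Linear(nonverbal_dim, 64)   # embed nonverbal cues
            self.classifier = nn.Sequential(
                nn.Linear(text_dim + 64, hidden_dim),
                nn.ReLU(),
                nn.Dropout(0.3),
                nn.Linear(hidden_dim, 2),                        # humorous vs. not
            )

        def forward(self, sentence_emb, nonverbal_feats):
            fused = torch.cat([sentence_emb, self.nonverbal_proj(nonverbal_feats)], dim=-1)
            return self.classifier(fused)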


Classifying Tweet Sentiment Using the Hidden State and Attention Matrix of a Fine-tuned BERTweet Model

Sep 29, 2021
Tommaso Macrì, Freya Murphy, Yunfan Zou, Yves Zumbach

This paper presents a study on tweet sentiment classification. Our task is to classify a tweet as either positive or negative. We approach the problem in two steps, namely embedding and classification. Our baseline methods include several combinations of traditional embedding methods and classification algorithms. Furthermore, we explore the current state-of-the-art tweet analysis model, BERTweet, and propose a novel approach in which features are engineered from the hidden states and attention matrices of the model, inspired by an empirical study of the tweets. Using a multi-layer perceptron trained with a high dropout rate for classification, our proposed approach achieves a validation accuracy of 0.9111.
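
A hedged sketch of how such features can be pulled out of BERTweet with the Hugging Face transformers library: request hidden states and attention matrices and pool them into a vector for a downstream MLP. The pooling choices are illustrative assumptions, not the authors' feature engineering, and the fine-tuning step is omitted.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
    model = AutoModel.from_pretrained(
        "vinai/bertweet-base", output_hidden_states=True, output_attentions=True
    )

    def tweet_features(text):
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            out = model(**inputs)
        pooled = out.last_hidden_state.mean(dim=1).squeeze(0)   # mean-pooled final layer
        attn = out.attentions[-1].mean(dim=1).squeeze(0)        # last layer, head-averaged
        received = attn.mean(dim=0)                             # attention each token receives
        # Crude attention-derived summary statistics appended to the pooled state
        return torch.cat([pooled, received.max().unsqueeze(0), received.mean().unsqueeze(0)])

    # The resulting vectors would feed a multi-layer perceptron trained with a
    # high dropout rate, as described in the abstract above.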


NILC-USP at SemEval-2017 Task 4: A Multi-view Ensemble for Twitter Sentiment Analysis

Apr 07, 2017
Edilson A. Corrêa Jr., Vanessa Queiroz Marinho, Leandro Borges dos Santos

This paper describes our multi-view ensemble approach to SemEval-2017 Task 4 on Sentiment Analysis in Twitter, specifically the Message Polarity Classification subtask for English (subtask A). Our system is a voting ensemble in which each base classifier is trained in a different feature space. The first space is a bag-of-words model with a Linear SVM as the base classifier. The second and third spaces use two different strategies for combining word embeddings to represent sentences, with a Linear SVM and a Logistic Regressor as their base classifiers. The proposed system was ranked 18th out of 38 systems on F1 score and 20th on recall.

* Published in Proceedings of SemEval-2017, 5 pages 
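
A rough scikit-learn sketch of the multi-view voting idea, in which each base classifier is a pipeline over a different representation of the same tweets. The MeanEmbedding transformer uses random placeholder vectors where the original system used pre-trained word embeddings combined in two different ways; the training tweets are toy examples.

    import numpy as np
    from sklearn.base import BaseEstimator, TransformerMixin
    from sklearn.ensemble import VotingClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    class MeanEmbedding(BaseEstimator, TransformerMixin):
        """Average word vectors per tweet (random placeholder embeddings)."""
        def __init__(self, dim=50, seed=0):
            self.dim, self.seed = dim, seed
        def fit(self, X, y=None):
            rng = np.random.default_rng(self.seed)
            vocab = {w for text in X for w in text.lower().split()}
            self.vectors_ = {w: rng.normal(size=self.dim) for w in vocab}
            return self
        def transform(self, X):
            return np.array([
                np.mean([self.vectors_.get(w, np.zeros(self.dim))
                         for w in text.lower().split()] or [np.zeros(self.dim)], axis=0)
                for text in X
            ])

    ensemble = VotingClassifier(
        estimators=[
            ("bow_svm", make_pipeline(CountVectorizer(), LinearSVC())),
            ("emb_svm", make_pipeline(MeanEmbedding(seed=0), LinearSVC())),
            ("emb_lr", make_pipeline(MeanEmbedding(seed=1), LogisticRegression(max_iter=1000))),
        ],
        voting="hard",
    )

    tweets = ["great product , love it", "worst service ever", "not bad at all"]
    labels = ["positive", "negative", "positive"]
    ensemble.fit(tweets, labels)
    print(ensemble.predict(["love the new update"]))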

JU_KS@SAIL_CodeMixed-2017: Sentiment Analysis for Indian Code Mixed Social Media Texts

Feb 15, 2018
Kamal Sarkar

This paper reports on our work in the NLP Tool Contest @ICON-2017, shared task on Sentiment Analysis for Indian Languages (SAIL) (code mixed). To implement our system, we used a machine learning algorithm, Multinomial Naïve Bayes, trained on n-gram and SentiWordNet features. We used a small SentiWordNet for English and a small SentiWordNet for Bengali, but no SentiWordNet for Hindi. We tested our system on the Hindi-English and Bengali-English code-mixed social media data sets released for the contest. The performance of our system is very close to that of the best system in the contest. For both the Bengali-English and Hindi-English runs, our system was ranked 3rd among all submitted runs and awarded the 3rd prize in the contest.

* Kamal Sarkar, JU_KS@SAIL_CodeMixed-2017: Sentiment Analysis for Indian Code Mixed Social Media Texts, NLP Tool Contest@ICON-2017, 14th International Conference on Natural Language Processing, 2017 
* NLP Tool Contest on Sentiment Analysis for Indian Languages (Code Mixed) held in conjunction with the 14th International Conference on Natural Language Processing, 2017 
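
A minimal sketch of the n-gram portion of such a system, assuming a scikit-learn pipeline: Multinomial Naive Bayes over word uni- to tri-grams. The SentiWordNet lexicon scores used in the paper would be appended as extra non-negative feature columns and are omitted here; the code-mixed training strings are toy examples.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    clf = make_pipeline(
        CountVectorizer(ngram_range=(1, 3), lowercase=True),  # word uni- to tri-grams
        MultinomialNB(alpha=1.0),                             # Laplace smoothing
    )

    train_texts = ["khub bhalo laglo", "ei phone ta one number", "ekdom bekar service"]
    train_labels = ["positive", "positive", "negative"]
    clf.fit(train_texts, train_labels)
    print(clf.predict(["bhalo phone"]))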

BERT4GCN: Using BERT Intermediate Layers to Augment GCN for Aspect-based Sentiment Classification

Oct 01, 2021
Zeguan Xiao, Jiarun Wu, Qingliang Chen, Congjian Deng

Graph-based Aspect-based Sentiment Classification (ABSC) approaches have yielded state-of-the-art results, especially when equipped with contextual word embeddings from pre-trained language models (PLMs). However, they ignore sequential features of the context and have not yet made the best of PLMs. In this paper, we propose a novel model, BERT4GCN, which integrates the grammatical sequential features from BERT with the syntactic knowledge from dependency graphs. BERT4GCN utilizes outputs from intermediate layers of BERT and positional information between words to augment a GCN (Graph Convolutional Network) to better encode the dependency graphs for downstream classification. Experimental results demonstrate that the proposed BERT4GCN outperforms all state-of-the-art baselines, showing that augmenting GCN with the grammatical features from intermediate layers of BERT can significantly empower ABSC models.

* To appear in EMNLP 2021, 8 pages, 2 figures 
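
A hedged sketch (not the released BERT4GCN implementation) of the central idea: a graph-convolution step whose node features are taken from an intermediate BERT layer and whose adjacency matrix encodes dependency arcs plus self-loops. The layer index, shapes, and degree normalization are assumptions.

    import torch
    import torch.nn as nn

    class DependencyGCNLayer(nn.Module):
        def __init__(self, hidden_dim=768):
            super().__init__()
            self.linear = nn.Linear(hidden_dim, hidden_dim)

        def forward(self, node_feats, adj):
            # node_feats: [batch, seq_len, hidden], e.g. hidden states from an
            # intermediate BERT layer; adj: [batch, seq_len, seq_len] with 1 where
            # a dependency arc (or self-loop) connects two tokens.
            degree = adj.sum(dim=-1, keepdim=True).clamp(min=1)  # simple degree normalization
            aggregated = torch.bmm(adj, node_feats) / degree
            return torch.relu(self.linear(aggregated))

    # Usage sketch: hidden = bert(..., output_hidden_states=True).hidden_states
    #               gcn_out = DependencyGCNLayer()(hidden[9], dependency_adj)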

Modeling Inter-Aspect Dependencies with a Non-temporal Mechanism for Aspect-Based Sentiment Analysis

Aug 12, 2020
Yunlong Liang, Fandong Meng, Jinchao Zhang, Yufeng Chen, Jinan Xu, Jie Zhou

In the multiple-aspect scenario of aspect-based sentiment analysis (ABSA), existing approaches typically ignore inter-aspect relations or rely on temporal dependencies to process the aspect-aware representations of all aspects in a sentence. Although the multiple aspects of a sentence appear in a non-adjacent sequential order, they do not stand in a strict temporal relationship the way a natural language sequence does; thus, aspect-aware sentence representations should not be processed as temporal dependencies. In this paper, we propose a novel non-temporal mechanism that enhances the ABSA task by modeling inter-aspect dependencies. Furthermore, we address the well-known class-imbalance issue in ABSA by down-weighting the loss assigned to well-classified instances. Experiments on two distinct domains of SemEval 2014 Task 4 demonstrate the effectiveness of our proposed approach.

* Rejected from EMNLP-IJCNLP 2019 as a short paper (3/3/3) 
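
One common way to down-weight the loss assigned to well-classified instances is a focal-loss-style modulation of cross-entropy; the sketch below illustrates that idea and is not necessarily the paper's exact weighting scheme. The gamma value and example tensors are arbitrary.

    import torch
    import torch.nn.functional as F

    def downweighted_ce(logits, targets, gamma=2.0):
        """Cross-entropy scaled by (1 - p_correct)**gamma: confident, already
        well-classified examples contribute less to the total loss."""
        log_probs = F.log_softmax(logits, dim=-1)
        ce = F.nll_loss(log_probs, targets, reduction="none")
        p_correct = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
        return ((1.0 - p_correct) ** gamma * ce).mean()

    # Example: 3 instances, 3 sentiment classes (negative / neutral / positive)
    logits = torch.tensor([[2.0, 0.1, -1.0], [0.2, 0.1, 0.3], [-1.0, 0.0, 2.5]])
    targets = torch.tensor([0, 2, 2])
    print(downweighted_ce(logits, targets))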

Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines

May 01, 2017
Andrew Moore, Paul Rayson

This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance specific word embedding model and reflect on the effects of using different evaluation metrics.

* 5 pages, to appear in the Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017), August 2017, Vancouver, BC 
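
A hedged sketch of the SVR side of the approach: regress a continuous sentiment score in [-1, 1] from the averaged word vectors of a headline. The random embedding lookup stands in for the finance-specific embedding model mentioned in the abstract, and the headlines and gold scores are invented for illustration.

    import numpy as np
    from sklearn.svm import SVR

    def headline_vector(headline, vectors, dim=100):
        words = [vectors[w] for w in headline.lower().split() if w in vectors]
        return np.mean(words, axis=0) if words else np.zeros(dim)

    rng = np.random.default_rng(0)
    vectors = {w: rng.normal(size=100) for w in
               ["shares", "surge", "on", "record", "profit", "falls", "after", "warning"]}

    X = np.array([headline_vector(h, vectors) for h in
                  ["shares surge on record profit", "profit falls after warning"]])
    y = np.array([0.8, -0.6])   # gold sentiment scores on the [-1, 1] scale

    model = SVR(kernel="rbf", C=1.0)
    model.fit(X, y)
    print(model.predict(X))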

Covid-19 Discourse on Twitter: How the Topics, Sentiments, Subjectivity, and Figurative Frames Changed Over Time

Mar 16, 2021
Philipp Wicke, Marianna M. Bolognesi

The words we use to talk about the current epidemiological crisis on social media can inform us on how we are conceptualizing the pandemic and how we are reacting to its development. This paper provides an extensive explorative analysis of how the discourse about Covid-19 reported on Twitter changed through time, focusing on the first wave of the pandemic. Based on an extensive corpus of tweets (produced between 20th March and 1st July 2020), we first show, using topic modeling, how the topics associated with the development of the pandemic changed through time. Second, we show how the sentiment polarity of the language used in the tweets shifted from a relatively positive valence during the first lockdown toward a more negative valence around the reopening. Third, we show how the average subjectivity of the tweets increased linearly, and fourth, how the popular and frequently used figurative frame of WAR changed when real riots and fights entered the discourse.

* Frontiers in Communication, Volume: 6, Pages: 45, Year: 2021 
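
An illustrative sketch of one way to produce the kinds of measurements described above, assuming scikit-learn's LDA for topics and TextBlob for per-tweet polarity and subjectivity; the paper's actual tooling, corpus, and time-binning are not reproduced here.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from textblob import TextBlob

    tweets = ["stay home and stay safe everyone",
              "the reopening feels rushed and unsafe",
              "we will win this war against the virus"]

    # Topic modeling over the tweet corpus
    counts = CountVectorizer(stop_words="english").fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    # Sentiment polarity (-1..1) and subjectivity (0..1) per tweet
    for t in tweets:
        s = TextBlob(t).sentiment
        print(f"{t!r}: polarity={s.polarity:.2f}, subjectivity={s.subjectivity:.2f}")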

Disentangling Aspect and Opinion Words in Target-based Sentiment Analysis using Lifelong Learning

Feb 16, 2018
Shuai Wang, Mianwei Zhou, Sahisnu Mazumder, Bing Liu, Yi Chang

Given a target name, which can be a product aspect or entity, identifying its aspect words and opinion words in a given corpus is a fine-grained task in target-based sentiment analysis (TSA). This task is challenging, especially when we have no labeled data and want to perform it for any given domain. To address it, we propose a general two-stage approach. Stage one extracts/groups the target-related words (called t-words) for a given target. This is relatively easy, as we can apply an existing semantics-based learning technique. Stage two separates the aspect and opinion words from the grouped t-words, which is challenging because we often do not have enough word-level aspect and opinion labels. In this work, we formulate this problem in a PU learning setting and incorporate the idea of lifelong learning to solve it. Experimental results show the effectiveness of our approach.
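
A hedged sketch of the basic PU-learning framing only (not the paper's lifelong method): a few words are labeled positive (known aspect words), the remaining t-words are unlabeled and treated as provisional negatives, and a classifier scores how aspect-like each word is. The toy character-level features stand in for real word representations.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    t_words = ["battery", "screen", "amazing", "terrible", "resolution", "poor"]
    positive = {"battery", "screen"}          # known aspect words (the P set)

    # Toy features: [word length, ends with a vowel?]; real systems would use embeddings
    feats = np.array([[len(w), int(w[-1] in "aeiou")] for w in t_words], dtype=float)
    labels = np.array([1 if w in positive else 0 for w in t_words])  # unlabeled treated as 0

    pu_clf = LogisticRegression().fit(feats, labels)
    scores = pu_clf.predict_proba(feats)[:, 1]   # higher score = more aspect-like
    print(dict(zip(t_words, scores.round(2))))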


Reed at SemEval-2020 Task 9: Fine-Tuning and Bag-of-Words Approaches to Code-Mixed Sentiment Analysis

Aug 04, 2020
Vinay Gopalan, Mark Hopkins

We explore the task of sentiment analysis on Hinglish (code-mixed Hindi-English) tweets as participants in Task 9 of the SemEval-2020 competition, known as the SentiMix task. We had two main approaches: 1) applying transfer learning by fine-tuning pre-trained BERT models and 2) training feedforward neural networks on bag-of-words representations. During the evaluation phase of the competition, we obtained an F-score of 71.3% with our best model, which placed 4th out of 62 entries in the official system rankings.
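
A minimal sketch of the second approach (bag-of-words features into a feedforward network), assuming scikit-learn's MLPClassifier as the feedforward model; the architecture, preprocessing, and data in the paper may differ, and the Hinglish tweets below are toy examples.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    clf = make_pipeline(
        CountVectorizer(),                                   # bag-of-words features
        MLPClassifier(hidden_layer_sizes=(256,), max_iter=300, random_state=0),
    )

    hinglish_tweets = ["movie bahut accha tha", "service bilkul bekar hai", "kya mast gaana hai"]
    labels = ["positive", "negative", "positive"]
    clf.fit(hinglish_tweets, labels)
    print(clf.predict(["bekar movie"]))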

