"Sentiment": models, code, and papers

Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models

Feb 11, 2022
Abigail Swenor, Jugal Kalita

Attacks on deep learning models are often difficult to identify and therefore difficult to protect against. The problem is exacerbated by the use of public datasets that typically are not manually inspected before use. In this paper, we offer a solution to this vulnerability: during testing, we apply random perturbations, such as spelling correction if necessary, substitution with a random synonym, or simply dropping the word, to random words in random sentences to defend NLP models against adversarial attacks. Our Random Perturbations Defense and Increased Randomness Defense methods successfully return attacked models to accuracy comparable to that before the attacks. The model used in this work originally achieves 80% accuracy on sentiment classification; after the attacks, accuracy drops to between 0% and 44%. After applying our defense methods, the model's accuracy returns to the original level within statistical significance.
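The test-time defense the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name, perturbation rate `p`, and synonym table are assumptions, and a full version would also include the spelling-correction branch.

```python
import random

def perturb_tokens(tokens, synonyms, p=0.3, seed=0):
    """Randomly perturb tokens at test time: substitute a random synonym,
    or simply drop the word when no synonym is known.

    `synonyms` maps a word to a list of replacements; `p` is the chance
    that a given word is selected for perturbation.
    """
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if rng.random() < p:                           # word selected for perturbation
            if tok in synonyms:
                out.append(rng.choice(synonyms[tok]))  # synonym substitution
            # else: drop the word entirely
        else:
            out.append(tok)
    return out
```

The randomness is the point: an attacker who crafted adversarial word choices cannot predict which of them will survive to reach the classifier.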

* To be published in the proceedings of the 18th International Conference on Natural Language Processing (ICON 2021) 

Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis

Sep 19, 2016
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, Xiaokui Xiao

In aspect-based sentiment analysis, extracting aspect terms along with the opinions expressed in user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion term co-extraction. The proposed model learns high-level discriminative features and simultaneously doubly propagates information between aspect and opinion terms. Moreover, it can flexibly incorporate hand-crafted features to further boost its information extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge.
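The CRF side of such a joint model ultimately decodes the best aspect/opinion tag sequence from per-token scores. A minimal Viterbi decoder over BIO-style tags might look like the following; this is a generic CRF decoding sketch, not the paper's RNCRF implementation, and the tag set and score layout are assumptions.

```python
def viterbi(emissions, transitions, tags):
    """Decode the best tag sequence from per-token scores (emissions)
    plus tag-transition scores, as a CRF layer would at inference."""
    n, k = len(emissions), len(tags)
    score = list(emissions[0])        # best score ending in each tag so far
    back = []                          # backpointers per step
    for t in range(1, n):
        new, ptr = [], []
        for j in range(k):
            best_i = max(range(k), key=lambda i: score[i] + transitions[i][j])
            new.append(score[best_i] + transitions[best_i][j] + emissions[t][j])
            ptr.append(best_i)
        score = new
        back.append(ptr)
    best = max(range(k), key=lambda j: score[j])
    path = [best]
    for ptr in reversed(back):         # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    path.reverse()
    return [tags[j] for j in path]
```

In the paper's setting the emission scores would come from the recursive neural network rather than being hand-set.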


BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and LSTMs

Apr 20, 2017
Mathieu Cliche

In this paper we describe our attempt at producing a state-of-the-art Twitter sentiment classifier using Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. Our system leverages a large amount of unlabeled data to pre-train word embeddings. We then use a subset of the unlabeled data to fine-tune the embeddings using distant supervision. The final CNNs and LSTMs are trained on the SemEval-2017 Twitter dataset, where the embeddings are fine-tuned again. To boost performance, we ensemble several CNNs and LSTMs together. Our approach ranked first on all five English subtasks among 40 teams.
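Ensembling several CNNs and LSTMs, as described, typically averages each model's class probabilities (soft voting). A minimal sketch, taking the models' softmax outputs as given:

```python
def ensemble_predict(prob_lists):
    """Soft-voting ensemble: average class-probability vectors from
    several models and return (predicted class, averaged vector)."""
    n = len(prob_lists)
    num_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n for c in range(num_classes)]
    return avg.index(max(avg)), avg
```

For example, with three models voting over {negative, neutral, positive}, the averaged vector decides the final label even when the individual models disagree.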

* Published in Proceedings of SemEval-2017, 8 pages 

Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised

Aug 27, 2018
Stefanos Angelidis, Mirella Lapata

We present a neural framework for opinion summarization from online product reviews which is knowledge-lean and only requires light supervision (e.g., in the form of product domain labels and user-provided ratings). Our method combines two weakly supervised components to identify salient opinions and form extractive summaries from multiple reviews: an aspect extractor trained under a multi-task objective, and a sentiment predictor based on multiple instance learning. We introduce an opinion summarization dataset that includes a training set of product reviews from six diverse domains and human-annotated development and test sets with gold standard aspect annotations, salience labels, and opinion summaries. Automatic evaluation shows significant improvements over baselines, and a large-scale study indicates that our opinion summaries are preferred by human judges according to multiple criteria.

* In EMNLP 2018 (long paper). For supplementary material, see http://stangelid.github.io/supplemental.pdf 

Deep Multi-Task Model for Sarcasm Detection and Sentiment Analysis in Arabic Language

Jun 23, 2021
Abdelkader El Mahdaouy, Abdellah El Mekki, Kabil Essefar, Nabil El Mamoun, Ismail Berrada, Ahmed Khoumsi

The prominence of figurative language devices, such as sarcasm and irony, poses serious challenges for Arabic Sentiment Analysis (SA). While previous research works tackle SA and sarcasm detection separately, this paper introduces an end-to-end deep Multi-Task Learning (MTL) model, allowing knowledge interaction between the two tasks. Our MTL model's architecture consists of a Bidirectional Encoder Representations from Transformers (BERT) model, a multi-task attention interaction module, and two task classifiers. The overall results show that our proposed model outperforms its single-task counterparts on both the SA and sarcasm detection subtasks.
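The multi-task structure, one shared encoder feeding separate task classifiers, can be sketched as follows. This is a pure-Python toy with hand-set weights; the actual model uses BERT features and an attention interaction module between the heads.

```python
def linear_head(features, weights, biases):
    """One task classifier: a linear layer over the shared representation."""
    return [sum(f * w for f, w in zip(features, row)) + b
            for row, b in zip(weights, biases)]

def mtl_forward(features, heads):
    """Run every task head on the same shared features, so both tasks
    see (and, during training, shape) a single representation."""
    return {task: linear_head(features, w, b) for task, (w, b) in heads.items()}
```

Because both heads read the same features, gradients from the sarcasm task would also refine the representation the sentiment task uses, which is the knowledge interaction the abstract refers to.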


Cross-Domain Sentiment Classification with Contrastive Learning and Mutual Information Maximization

Nov 12, 2020
Tian Li, Xiang Chen, Shanghang Zhang, Zhen Dong, Kurt Keutzer

Contrastive learning (CL) has been successful as a powerful representation learning method. In this work we propose CLIM: Contrastive Learning with mutual Information Maximization, to explore the potential of CL on cross-domain sentiment classification. To the best of our knowledge, CLIM is the first to adopt contrastive learning for natural language processing (NLP) tasks across domains. Due to the scarcity of labels on the target domain, we introduce mutual information maximization (MIM) apart from CL to exploit the features that best support the final prediction. Furthermore, MIM is able to maintain a relatively balanced distribution of the model's predictions and enlarges the margin between classes on the target domain. The larger margin increases our model's robustness and enables the same classifier to be optimal across domains. Consequently, we achieve new state-of-the-art results on the Amazon-review dataset as well as the airlines dataset, showing the efficacy of our proposed method, CLIM.
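A common way to write an MIM term of this kind is the entropy of the batch-averaged prediction minus the average per-example entropy: maximizing it keeps the marginal label distribution balanced while pushing individual predictions to be confident, which is what enlarges the class margin. A sketch of that computation (an illustrative formulation, not the authors' code):

```python
import math

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def mim_objective(pred_batch):
    """Mutual-information term over a batch of class-probability vectors:
    H(marginal prediction) - mean per-example entropy."""
    n = len(pred_batch)
    k = len(pred_batch[0])
    marginal = [sum(p[c] for p in pred_batch) / n for c in range(k)]
    return entropy(marginal) - sum(entropy(p) for p in pred_batch) / n
```

A batch of confident, class-balanced predictions scores high; a batch of uniform (uncertain) predictions scores zero, so maximizing this term sharpens decisions without collapsing onto one class.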

* 5 pages, 1 figure 

Audio-Visual Sentiment Analysis for Learning Emotional Arcs in Movies

Dec 08, 2017
Eric Chu, Deb Roy

Stories can have tremendous power -- not only are they useful for entertainment, they can activate our interests and mobilize our actions. The degree to which a story resonates with its audience may be partly reflected in the emotional journey on which it takes the audience. In this paper, we use machine learning methods to construct emotional arcs in movies, calculate families of arcs, and demonstrate the ability of certain arcs to predict audience engagement. The system is applied to Hollywood films and high-quality shorts found on the web. We begin by using deep convolutional neural networks for audio and visual sentiment analysis. These models are trained on both new and existing large-scale datasets, after which they can be used to compute separate audio and visual emotional arcs. We then crowdsource annotations for 30-second video clips extracted from highs and lows in the arcs in order to assess the micro-level precision of the system, with precision measured in terms of agreement in polarity between the system's predictions and annotators' ratings. These annotations are also used to combine the audio and visual predictions. Next, we look at macro-level characterizations of movies by investigating whether there exist 'universal shapes' of emotional arcs. In particular, we develop a clustering approach to discover distinct classes of emotional arcs. Finally, we show on a sample corpus of short web videos that certain emotional arcs are statistically significant predictors of the number of comments a video receives. These results suggest that the emotional arcs learned by our approach successfully represent macroscopic aspects of a video story that drive audience engagement. Such machine understanding could be used to predict audience reactions to video stories, ultimately improving our ability as storytellers to communicate with each other.
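An emotional arc, in the sense used above, is essentially a smoothed time series of per-segment sentiment scores; the highs and lows of that curve are where the paper extracts clips for annotation. A minimal moving-average sketch (the window size and the source of the scores are illustrative, not the paper's settings):

```python
def emotional_arc(scores, window=3):
    """Smooth a sequence of per-clip sentiment scores into an arc
    with a simple moving average."""
    arc = []
    for i in range(len(scores) - window + 1):
        arc.append(sum(scores[i:i + window]) / window)
    return arc
```

Once every movie is reduced to such a curve, arcs can be compared and clustered to look for recurring shapes.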

* Data Mining (ICDM), 2017 IEEE 17th International Conference on 

Ranking Transfer Languages with Pragmatically-Motivated Features for Multilingual Sentiment Analysis

Jun 16, 2020
Jimin Sun, Hwijeen Ahn, Chan Young Park, Yulia Tsvetkov, David R. Mortensen

Cross-lingual transfer learning studies how datasets, annotations, and models can be transferred from resource-rich languages to improve language technologies in resource-poor settings. Recent works have shown that we can further benefit from the selection of the best transfer language. In this paper, we propose three pragmatically-motivated features that can help guide the optimal transfer language selection problem for cross-lingual transfer. Specifically, the proposed features operationalize cross-cultural similarities that manifest in various linguistic patterns: language context-level features, the sharing of multi-word expressions, and the use of emotion concepts. Our experimental results show that these features significantly improve the prediction of optimal transfer languages over baselines in sentiment analysis, but are less useful for dependency parsing. Further analyses show that the proposed features indeed capture the intended cross-cultural similarities and align well with existing work in sociolinguistics and linguistic anthropology.
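Selecting a transfer language from such features amounts to ranking candidates by their feature-space closeness to the target language. A toy sketch, in which the feature names, values, and weighting scheme are all hypothetical placeholders for the paper's learned ranker:

```python
def rank_transfer_languages(target_feats, candidates, weights):
    """Rank candidate transfer languages: smaller weighted feature
    distance to the target language scores higher (best first)."""
    def score(feats):
        return -sum(w * abs(feats[k] - target_feats[k])
                    for k, w in weights.items())
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)
```

The paper learns how to weight and combine the features instead of hand-setting them, but the output is the same kind of ranked candidate list.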


Can you tell? SSNet -- a Sagittal Stratum-inspired Neural Network Framework for Sentiment Analysis

Jun 23, 2020
Apostol Vassilev, Munawar Hasan

When people try to understand nuanced language, they typically process multiple input sensor modalities to complete this cognitive task. It turns out the human brain even has a specialized neuron formation, called the sagittal stratum, to help us understand sarcasm. We use this biological formation as the inspiration for designing a neural network architecture that combines the predictions of different models on the same text to construct a robust, accurate, and computationally efficient classifier for sentiment analysis. Experimental results on representative benchmark datasets and comparisons to other methods show the advantages of the new network architecture.

* 11 pages, 6 figures, 2 tables, 21 references 
