"Sentiment": models, code, and papers

Aspect Extraction and Sentiment Classification of Mobile Apps using App-Store Reviews

Dec 09, 2017
Sharmistha Dey

Understanding customer sentiment is useful for product development, and knowing the priority order for development makes the process simpler. This work addresses this issue in the mobile app domain: along with aspect and opinion extraction, it categorizes the extracted aspects according to their importance, which can help developers focus their time and energy in the right place.

* 12 pages 
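
For a rough sense of the kind of pipeline involved (not the paper's actual method), here is a minimal sketch that treats noun chunks as candidate aspects, ranks them by mention frequency as a stand-in for importance, and assigns polarity from a tiny hand-made lexicon; spaCy and the lexicon are assumptions for illustration.

```python
# Minimal sketch of aspect extraction + polarity + importance ranking for
# app-store reviews. This is a generic illustration, NOT the paper's method:
# aspects are noun chunks, importance is raw mention frequency, and polarity
# comes from a tiny hand-made lexicon (all assumptions for demonstration).
from collections import Counter, defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")

OPINION_LEXICON = {"great": 1, "good": 1, "love": 1,
                   "slow": -1, "crash": -1, "bad": -1, "annoying": -1}

reviews = [
    "The login screen is great but the app crashes on startup.",
    "Battery drain is annoying, though the new dark mode looks good.",
]

aspect_counts = Counter()
aspect_polarity = defaultdict(int)

for doc in nlp.pipe(reviews):
    for chunk in doc.noun_chunks:                 # candidate aspects
        aspect = chunk.root.lemma_.lower()
        aspect_counts[aspect] += 1
        # naive opinion assignment: sum lexicon scores in the same sentence
        aspect_polarity[aspect] += sum(
            OPINION_LEXICON.get(tok.lemma_.lower(), 0) for tok in chunk.sent)

# "importance" here is just mention frequency, a crude stand-in for the
# paper's importance categorisation
for aspect, count in aspect_counts.most_common(5):
    print(aspect, count, aspect_polarity[aspect])
```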

Gradual Machine Learning for Aspect-level Sentiment Analysis

Jul 01, 2019
Yanyan Wang, Qun Chen, Jiquan Shen, Boyi Hou, Murtadha Ahmed, Zhanhuai Li

The state-of-the-art solutions for Aspect-Level Sentiment Analysis (ALSA) are built on a variety of deep neural networks (DNN), whose efficacy depends on large amounts of accurately labeled training data. Unfortunately, high-quality labeled training data usually require expensive manual work and may thus not be readily available in real scenarios. In this paper, we propose a novel solution for ALSA based on the recently proposed paradigm of gradual machine learning, which enables effective machine labeling without manual labeling effort. It begins with the easy instances in an ALSA task, which can be automatically labeled by the machine with high accuracy, and then gradually labels the more challenging instances by iterative factor graph inference. In the process of gradual machine learning, the hard instances are labeled in small stages based on the estimated evidential certainty provided by the labeled easier instances. Our extensive experiments on benchmark datasets show that the proposed solution performs considerably better than its unsupervised alternatives and is highly competitive with state-of-the-art supervised DNN techniques.

* arXiv admin note: text overlap with arXiv:1810.12125 
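
The paper's easy-to-hard labeling idea can be loosely illustrated with a much simpler self-training-style loop: seed with a few confidently labeled instances, then repeatedly commit the single most certain machine label. This sketch uses scikit-learn and toy data, and is only an analogue of the idea, not the paper's factor graph inference.

```python
# Simplified illustration of easy-to-hard labeling: start from a few
# confidently labeled seed instances and iteratively pseudo-label the most
# certain remaining one. A self-training analogue, NOT factor graph inference.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["the food was wonderful", "terrible, slow service",
         "portions are generous and tasty", "never coming back here",
         "the pasta was wonderful too", "service was terrible again"]
labels = np.array([1, 0, -1, -1, -1, -1])   # -1 = unlabeled; seeds are "easy"

X = TfidfVectorizer().fit_transform(texts)

while (labels == -1).any():
    labeled = np.where(labels != -1)[0]
    clf = LogisticRegression().fit(X[labeled], labels[labeled])
    unlabeled = np.where(labels == -1)[0]
    proba = clf.predict_proba(X[unlabeled])
    i = proba.max(axis=1).argmax()                     # most certain instance
    labels[unlabeled[i]] = clf.classes_[proba[i].argmax()]

print(list(zip(texts, labels)))
```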

Review-Level Sentiment Classification with Sentence-Level Polarity Correction

Nov 07, 2015
Sylvester Olubolu Orimaye, Saadat M. Alhashmi, Eu-Gene Siew, Sang Jung Kang

We propose an effective technique for solving the review-level sentiment classification problem using sentence-level polarity correction. Our polarity correction technique takes into account the consistency of the polarities (positive and negative) of the sentences within each product review before performing the actual machine learning task. Sentences with inconsistent polarities are removed, while sentences with consistent polarities are used to learn state-of-the-art classifiers. The technique achieved better results on different types of product reviews and outperforms baseline models without the correction technique. Experimental results show an average F-measure of 82% on four different product review domains.

* 15 pages. This paper is based on the same sentence-level technique proposed in Orimaye, S. O., Alhashmi, S. M., and Siew, E. G. Buy it - don't buy it: sentiment classification on Amazon reviews using sentence polarity shift. In PRICAI 2012: Trends in Artificial Intelligence, pp. 386-399. Springer Berlin Heidelberg
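
A minimal sketch of the polarity-correction step described above, keeping only the sentences whose polarity agrees with the review's majority polarity; the naive sentence splitter and the lexicon-based `sentence_polarity` scorer are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of sentence-level polarity correction: keep only the sentences whose
# polarity agrees with the review's majority polarity, then feed the retained
# text to the review-level classifier. Splitter and scorer are placeholders.
def split_sentences(review: str):
    """Naive sentence splitter (a real system would use a proper tokenizer)."""
    return [s.strip() for s in review.split(".") if s.strip()]

def sentence_polarity(sentence: str) -> int:
    """Placeholder polarity scorer: +1 / -1 from a tiny negative-cue lexicon."""
    negative_cues = {"bad", "poor", "broken", "refund", "worst", "slow"}
    return -1 if any(w in sentence.lower().split() for w in negative_cues) else 1

def correct_review(review: str) -> str:
    sentences = split_sentences(review)
    polarities = [sentence_polarity(s) for s in sentences]
    majority = 1 if sum(polarities) >= 0 else -1
    # drop sentences whose polarity is inconsistent with the majority
    return " ".join(s for s, p in zip(sentences, polarities) if p == majority)

review = ("The camera is excellent and the battery lasts all day. "
          "Shipping was the worst experience ever. Overall a great phone.")
print(correct_review(review))   # the inconsistent negative sentence is removed
```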

CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks

Dec 05, 2021
Zixuan Ke, Bing Liu, Hu Xu, Lei Shu

This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL). Each task is from a different domain or product. The DIL setting is particularly suited to ASC because, at test time, the system does not need to know the task/domain to which the test data belongs. To our knowledge, this setting has not been studied before for ASC. This paper proposes a novel model called CLASSIC. The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing. Experimental results show the high effectiveness of CLASSIC.

* EMNLP 2021 
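
CLASSIC's training objective is more involved, but the contrastive ingredient can be illustrated with a generic NT-Xent-style loss over two views of a batch of sentence embeddings; this is a standard contrastive loss shown for orientation only, not the paper's loss.

```python
# Generic NT-Xent-style contrastive loss over two views of the same batch,
# shown only to illustrate the contrastive ingredient; NOT CLASSIC's
# actual training objective.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1[i] and z2[i] are embeddings of two views of the same sentence."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = z @ z.t() / temperature                      # cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                     float("-inf"))                    # ignore self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```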

A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods

Jun 16, 2021
Gullal S. Cheema, Sherzod Hakimov, Eric Müller-Budack, Ralph Ewerth

Opinion and sentiment analysis is a vital task for characterizing subjective information in social media posts. In this paper, we present a comprehensive experimental evaluation and comparison of six state-of-the-art methods, one of which we have re-implemented. In addition, we investigate different textual and visual feature embeddings that cover different aspects of the content, as well as the recently introduced multimodal CLIP embeddings. Experimental results are presented for two different publicly available benchmark datasets of tweets and corresponding images. In contrast to the evaluation methodology of previous work, we introduce a reproducible and fair evaluation scheme to make results comparable. Finally, we conduct an error analysis to outline the limitations of the methods and possibilities for future work.

* Accepted in Workshop on Multi-Modal Pre-Training for Multimedia Understanding (MMPT 2021), co-located with ICMR 2021 
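
As a rough sketch of the multimodal CLIP features mentioned above, the snippet below extracts text and image embeddings with Hugging Face transformers and concatenates them for a downstream classifier; the model checkpoint and simple concatenation fusion are illustrative assumptions, not necessarily the paper's setup.

```python
# Sketch: CLIP text + image embeddings for a tweet-image pair, concatenated
# into one multimodal feature vector for a downstream sentiment classifier.
# Checkpoint and concatenation fusion are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

tweet = "best concert of my life, still shaking"
image = Image.open("tweet_image.jpg")                  # path is a placeholder

inputs = processor(text=[tweet], images=image, return_tensors="pt",
                   padding=True, truncation=True)
with torch.no_grad():
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])

features = torch.cat([text_emb, image_emb], dim=-1)    # (1, 1024)
```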

HinglishNLP: Fine-tuned Language Models for Hinglish Sentiment Detection

Aug 22, 2020
Meghana Bhange, Nirant Kasliwal

Sentiment analysis for code-mixed social media text continues to be an under-explored area. This work adds two common approaches: fine-tuning large transformer models and sample-efficient methods like ULMFiT. Prior work demonstrates the efficacy of classical ML methods for polarity detection. Fine-tuned general-purpose language representation models, such as those of the BERT family, are benchmarked along with classical machine learning and ensemble methods. We show that NB-SVM beats RoBERTa by 6.2% (relative) F1. The best-performing model is a majority-vote ensemble which achieves an F1 of 0.707. The leaderboard submission was made under the CodaLab username nirantk, with an F1 of 0.689.

* SemEval 2020 
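
The NB-SVM baseline referenced above follows the familiar Wang & Manning (2012) recipe of scaling bag-of-words features by Naive Bayes log-count ratios before fitting a linear classifier; the sketch below shows that recipe on toy data (the actual Hinglish data and preprocessing are omitted, and LinearSVC stands in for the exact classifier used).

```python
# Rough sketch of an NB-SVM-style classifier (Wang & Manning, 2012): scale
# bag-of-words features by Naive Bayes log-count ratios, then fit a linear
# classifier. Toy data only; dense arrays are used purely for brevity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["yaar this movie was awesome", "total time waste, bakwaas film",
         "kya mast gaana hai", "worst acting ever dekha"]
y = np.array([1, 0, 1, 0])                      # 1 = positive, 0 = negative

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts).toarray()

# Naive Bayes log-count ratio r = log( p(f | pos) / p(f | neg) )
eps = 1.0
pos = X[y == 1].sum(axis=0) + eps
neg = X[y == 0].sum(axis=0) + eps
r = np.log((pos / pos.sum()) / (neg / neg.sum()))

clf = LinearSVC().fit(X * r, y)                 # NB-scaled features
test = vec.transform(["bahut bakwaas movie"]).toarray() * r
print(clf.predict(test))
```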

Multi-task Learning of Negation and Speculation for Targeted Sentiment Classification

Oct 16, 2020
Andrew Moore, Jeremy Barnes

The majority of work in targeted sentiment analysis has concentrated on finding better methods to improve the overall results. In this paper we show that these models are not robust to linguistic phenomena, specifically negation and speculation. We propose a multi-task learning method that incorporates information from syntactic and semantic auxiliary tasks, including negation and speculation scope detection, to create models that are more robust to these phenomena. Further, we create two challenge datasets to evaluate model performance on negated and speculative samples. We find that multi-task models and transfer learning from a language model can improve performance on these challenge datasets. However, the results indicate that there is still much room for improvement in making our models more robust to linguistic phenomena such as negation and speculation.
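
A minimal sketch of the multi-task idea: a shared encoder with a sentence-level sentiment head and a token-level negation/speculation scope head, trained on a weighted sum of the two losses. The encoder, dimensions, and loss weighting below are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal multi-task skeleton: shared encoder, a sentence-level sentiment head
# and a token-level negation/speculation scope head, trained on a weighted sum
# of the two losses. All architecture choices are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size=10000, dim=128, n_sentiments=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.sentiment_head = nn.Linear(2 * dim, n_sentiments)  # per sentence
        self.scope_head = nn.Linear(2 * dim, 2)          # per token: in/out of scope

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.embed(token_ids))  # (B, T, 2*dim)
        sentiment_logits = self.sentiment_head(hidden.mean(dim=1))
        scope_logits = self.scope_head(hidden)
        return sentiment_logits, scope_logits

model = MultiTaskModel()
tokens = torch.randint(0, 10000, (4, 12))                # toy batch
sentiment_gold = torch.randint(0, 3, (4,))
scope_gold = torch.randint(0, 2, (4, 12))

sent_logits, scope_logits = model(tokens)
loss = (nn.functional.cross_entropy(sent_logits, sentiment_gold)
        + 0.5 * nn.functional.cross_entropy(scope_logits.reshape(-1, 2),
                                            scope_gold.reshape(-1)))
loss.backward()
```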


Systematic Attack Surface Reduction For Deployed Sentiment Analysis Models

Jun 19, 2020
Josh Kalin, David Noever, Gerry Dozier

This work proposes a structured approach to baselining a model, identifying attack vectors, and securing machine learning models after deployment. This method for securing each model post-deployment is called the BAD (Build, Attack, and Defend) Architecture. Two implementations of the BAD architecture are evaluated to quantify the adversarial life cycle for a black-box Sentiment Analysis system. As a challenging diagnostic, the Jigsaw Toxic Bias dataset is selected as the baseline in our performance tool. Each implementation of the architecture builds a baseline performance report, attacks a common weakness, and defends against the incoming attack. Importantly, each attack surface demonstrated in this work is detectable and preventable. The goal is to demonstrate a viable methodology for securing a machine learning model in a production setting.

* 11 pages, 4 figures, 6th International Conference on Data Mining 
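
To make the attack step of a Build/Attack/Defend cycle concrete, here is a toy black-box probe that perturbs inputs and counts prediction flips; `predict_sentiment` and the character-swap perturbation are hypothetical stand-ins, not the paper's tooling.

```python
# Toy illustration of the "Attack" step in a Build/Attack/Defend loop:
# probe a black-box sentiment model with perturbed inputs and count
# prediction flips. Both the model stub and the perturbation are stand-ins.
import random

def predict_sentiment(text: str) -> str:
    """Placeholder for a deployed black-box model endpoint."""
    return "negative" if "bad" in text.lower() else "positive"

def perturb(text: str) -> str:
    """Trivial character-level perturbation: swap two adjacent characters."""
    i = random.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

baseline_inputs = ["this update is bad", "works fine for me"]   # Build
flips = 0
for text in baseline_inputs:                                    # Attack
    original = predict_sentiment(text)
    if any(predict_sentiment(perturb(text)) != original for _ in range(20)):
        flips += 1
# A "Defend" step would, e.g., normalise inputs before prediction.
print(f"{flips}/{len(baseline_inputs)} inputs flipped under perturbation")
```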

Solving Aspect Category Sentiment Analysis as a Text Generation Task

Oct 14, 2021
Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, Yue Zhang

Aspect category sentiment analysis (ACSA) has attracted increasing research attention. The dominant methods make use of pre-trained language models by learning effective aspect category-specific representations and adding task-specific output layers on top of the pre-trained representations. We consider a more direct way of making use of pre-trained language models, by casting ACSA tasks as natural language generation tasks and using natural language sentences to represent the output. Our method allows more direct use of pre-trained knowledge in seq2seq language models by directly following the task setting used during pre-training. Experiments on several benchmarks show that our method gives the best reported results, with large advantages in few-shot and zero-shot settings.

* EMNLP 2021 main conference 
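
A minimal sketch of casting ACSA as text generation with a seq2seq model: the review and an aspect-category question are verbalised into a prompt and the model generates the answer. The prompt template is an assumption, and a vanilla t5-small is used only to show the interface; in practice the model would be fine-tuned on (review, verbalised label) pairs.

```python
# Sketch of aspect category sentiment analysis as text generation: the model
# is asked to produce a natural-language answer for one aspect category.
# Prompt template and checkpoint are illustrative assumptions; a real system
# would fine-tune on (review, verbalised label) pairs first.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

review = "The pasta was delicious but we waited forty minutes for a table."
prompt = (f"review: {review} "
          f"question: what is the sentiment towards the service category?")

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```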
