
"Sentiment": models, code, and papers

"Let me convince you to buy my product ... ": A Case Study of an Automated Persuasive System for Fashion Products

Sep 25, 2017
Vitobha Munigala, Srikanth Tamilselvam, Anush Sankaran

Persuasiveness is a creative art aimed at making people believe in a certain set of beliefs. Often, such creativity is about adapting the richness of one domain into another to strike a chord with the target audience. In this research, we present PersuAIDE! - a persuasive system based on linguistic creativity that transforms a given sentence into various forms of persuading sentences. These forms cover multiple foci of persuasion, such as memorability and sentiment. For a given simple product line, the algorithm is composed of several steps: (i) select an appropriate well-known expression for the target domain to add memorability, (ii) identify keywords and entities in the given sentence and expression and transform them to produce a creative persuading sentence, and (iii) add positive or negative sentiment for further persuasion. The persuasive conversions were manually verified using qualitative results, and the effectiveness of the proposed approach is empirically discussed.
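
The three-step pipeline described in the abstract can be sketched as follows. Everything here is an illustrative stand-in: the expression bank, the longest-word keyword heuristic, and the sentiment phrases are hypothetical, not the paper's actual resources.

```python
# Toy sketch of the PersuAIDE pipeline: (i) pick a domain expression,
# (ii) slot in a keyword from the product line, (iii) append sentiment.
EXPRESSIONS = {
    "fashion": "Look good, feel good with {keyword}.",
}
SENTIMENT_SUFFIX = {
    "positive": "You deserve nothing less.",
    "negative": "Don't settle for the ordinary.",
}

def extract_keyword(sentence):
    # Naive stand-in for the keyword/entity identification of step (ii):
    # pick the longest word in the sentence.
    return max(sentence.strip(".").split(), key=len)

def persuade(sentence, domain="fashion", sentiment="positive"):
    keyword = extract_keyword(sentence)                       # step (ii)
    expression = EXPRESSIONS[domain].format(keyword=keyword)  # steps (i)+(ii)
    return expression + " " + SENTIMENT_SUFFIX[sentiment]     # step (iii)

print(persuade("This jacket is made of soft leather"))
```

A real system would replace the keyword heuristic with NER and the expression bank with a curated corpus of well-known phrases for the target domain.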

* ML4Creativity workshop at SIGKDD 2017 


Document Embedding with Paragraph Vectors

Jul 29, 2015
Andrew M. Dai, Christopher Olah, Quoc V. Le

Paragraph Vectors has recently been proposed as an unsupervised method for learning distributed representations for pieces of text. In their work, the authors showed that the method can learn an embedding of movie review texts which can be leveraged for sentiment analysis. That proof of concept, while encouraging, was rather narrow. Here we consider tasks other than sentiment analysis, provide a more thorough comparison of Paragraph Vectors to other document modelling algorithms such as Latent Dirichlet Allocation, and evaluate the performance of the method as we vary the dimensionality of the learned representation. We benchmarked the models on two document similarity data sets, one from Wikipedia and one from arXiv. We observe that the Paragraph Vector method performs significantly better than other methods, and propose a simple improvement to enhance embedding quality. Somewhat surprisingly, we also show that, much like word embeddings, vector operations on Paragraph Vectors can produce useful semantic results.
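
The analogy-style "vector operations" mentioned in the closing sentence can be illustrated with toy vectors. The four hand-made document embeddings below are purely hypothetical stand-ins for learned Paragraph Vectors:

```python
# Analogy arithmetic on document vectors: A - B + C, answered by
# cosine-nearest neighbour among the remaining embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

embeddings = {
    "lady_gaga":    [0.9, 0.1, 0.8],
    "american_pop": [0.8, 0.0, 0.7],
    "japanese_pop": [0.1, 0.9, 0.7],
    "hatsune_miku": [0.2, 1.0, 0.8],
}

# "lady_gaga" - "american_pop" + "japanese_pop" ~ ?
query = [a - b + c for a, b, c in zip(embeddings["lady_gaga"],
                                      embeddings["american_pop"],
                                      embeddings["japanese_pop"])]

inputs = {"lady_gaga", "american_pop", "japanese_pop"}
best = max((k for k in embeddings if k not in inputs),
           key=lambda k: cosine(query, embeddings[k]))
print(best)
```

Excluding the three query terms from the candidate set mirrors the standard word-analogy evaluation protocol.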

* 8 pages 


Probing Speech Emotion Recognition Transformers for Linguistic Knowledge

Apr 01, 2022
Andreas Triantafyllopoulos, Johannes Wagner, Hagen Wierstorf, Maximilian Schmitt, Uwe Reichel, Florian Eyben, Felix Burkhardt, Björn W. Schuller

Large, pre-trained neural networks consisting of self-attention layers (transformers) have recently achieved state-of-the-art results on several speech emotion recognition (SER) datasets. These models are typically pre-trained in a self-supervised manner with the goal of improving automatic speech recognition performance -- and thus, of understanding linguistic information. In this work, we investigate the extent to which this information is exploited during SER fine-tuning. Using a reproducible methodology based on open-source tools, we synthesise prosodically neutral speech utterances while varying the sentiment of the text. Valence predictions of the transformer model are very reactive to positive and negative sentiment content, as well as to negations, but not to intensifiers or reducers, while none of these linguistic features impact arousal or dominance. These findings show that transformers can successfully leverage linguistic information to improve their valence predictions, and that linguistic analysis should be included in their testing.

* This work has been submitted for publication to Interspeech 2022 


BLUE at Memotion 2.0 2022: You have my Image, my Text and my Transformer

Feb 28, 2022
Ana-Maria Bucur, Adrian Cosma, Ioan-Bogdan Iordache

Memes are prevalent on the internet and continue to grow and evolve alongside our culture. An automatic understanding of memes propagating on the internet can shed light on the general sentiment and cultural attitudes of people. In this work, we present team BLUE's solution for the second edition of the MEMOTION competition. We showcase two approaches for meme classification (i.e., sentiment, humour, offensiveness, sarcasm, and motivation levels): a text-only method using BERT, and a Multi-Modal-Multi-Task transformer network that operates on both the meme image and its caption to output the final scores. In both approaches, we leverage state-of-the-art pretrained models for text (BERT, Sentence Transformer) and image processing (EfficientNetV4, CLIP). Through our efforts, we obtain first place in task A, second place in task B, and third place in task C. In addition, our team obtained the highest average score across all three tasks.
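
The multi-modal multi-task idea can be reduced to a schematic: image and caption features are fused and fed to one head per subtask. All features and weights below are toy values; the real system uses learned transformer fusion over CLIP/EfficientNet and BERT encodings.

```python
# Schematic multi-modal multi-task scoring: concatenate modality features,
# then apply one linear head per subtask.

def fuse(image_feat, text_feat):
    # Simple late fusion by concatenation (toy stand-in for the paper's
    # transformer fusion network).
    return image_feat + text_feat

def head(features, weights):
    # One linear task head (sentiment, humour, ...) over the fused features.
    return sum(f * w for f, w in zip(features, weights))

image_feat = [0.2, 0.7]  # toy image embedding
text_feat = [0.5, 0.1]   # toy caption embedding
fused = fuse(image_feat, text_feat)

heads = {
    "sentiment": [0.5, 0.5, 0.5, 0.5],
    "humour":    [1.0, 0.0, 0.0, 1.0],
}
scores = {task: head(fused, w) for task, w in heads.items()}
print(scores)
```

Sharing the fused representation across heads is what makes the setup multi-task: all subtasks are trained from the same image-plus-caption encoding.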



Knowledge Distillation for BERT Unsupervised Domain Adaptation

Oct 22, 2020
Minho Ryu, Kichun Lee

A pre-trained language model, BERT, has brought significant performance improvements across a range of natural language processing tasks. Since the model is trained on a large corpus of diverse topics, it shows robust performance for domain shift problems in which data distributions at training (source data) and testing (target data) differ while sharing similarities. Despite its great improvements compared to previous models, it still suffers from performance degradation due to domain shifts. To mitigate such problems, we propose a simple but effective unsupervised domain adaptation method, \emph{adversarial adaptation with distillation} (AAD), which combines the adversarial discriminative domain adaptation (ADDA) framework with knowledge distillation. We evaluate our approach in the task of cross-domain sentiment classification on 30 domain pairs, advancing the state-of-the-art performance for unsupervised domain adaptation in text sentiment classification.
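
The distillation half of AAD can be sketched as a temperature-softened cross-entropy between the source (teacher) and target (student) models; the adversarial ADDA objective is omitted here, and the logits and temperature are illustrative values, not the paper's settings.

```python
# Knowledge-distillation loss sketch: the target model is trained to match
# the source model's temperature-softened output distribution.
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between softened teacher and student distributions;
    # minimised when the student reproduces the teacher's outputs.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

teacher = [2.0, 0.5]  # source-domain model logits (illustrative)
student = [1.8, 0.7]  # adapting target model logits (illustrative)
print(distillation_loss(student, teacher))
```

In the full AAD setup this term is combined with an adversarial domain-discriminator loss so that the target encoder both mimics the source model and produces domain-invariant features.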



Text Mining Customer Reviews For Aspect-based Restaurant Rating

Jan 07, 2019
Jovelyn C. Cuizon, Jesserine Lopez, Danica Rose Jones

This study applies text mining to analyze customer reviews and automatically assign a collective restaurant star rating based on five predetermined aspects: ambiance, cost, food, hygiene, and service. The application provides a web and mobile crowd-sourcing platform where users share dining experiences and get insights about the strengths and weaknesses of a restaurant through user-contributed feedback. Text reviews are tokenized into sentences. Noun-adjective pairs are extracted from each sentence using the Stanford CoreNLP library and are associated with aspects based on the bag of associated words fed into the system. The sentiment weight of the adjectives is determined through the AFINN library. An overall restaurant star rating is computed based on the individual aspect ratings. Further, a word cloud is generated to provide a visual display of the most frequently occurring terms in the reviews. The more feedback is added, the more reflective the sentiment score becomes of the restaurant's performance.
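
The pipeline above can be sketched end to end. The tiny valence lexicon and aspect word bags below are hypothetical stand-ins for AFINN and the system's real "bag of associated words", and the adjacent-word pair rule stands in for CoreNLP's dependency-based extraction.

```python
# Aspect-based rating sketch: sentence split, naive noun-adjective pairing,
# lexicon scoring, and a mean-valence-to-stars mapping.
LEXICON = {"delicious": 3, "friendly": 2, "dirty": -3, "expensive": -2}
ASPECTS = {
    "food":    {"pasta", "soup", "food"},
    "service": {"waiter", "staff", "service"},
    "hygiene": {"restroom", "table", "floor"},
}

def score_review(review):
    scores = {}
    for sentence in review.lower().split("."):
        words = sentence.split()
        for i in range(len(words) - 1):
            adj, noun = words[i], words[i + 1]  # naive adjective-noun pattern
            if adj in LEXICON:
                for aspect, bag in ASPECTS.items():
                    if noun in bag:
                        scores.setdefault(aspect, []).append(LEXICON[adj])
    # Map mean valence from the lexicon's [-5, 5] range onto 1-5 stars.
    return {a: round((sum(v) / len(v) + 5) * 0.4 + 1, 1)
            for a, v in scores.items()}

print(score_review("The delicious pasta was great. Sadly a dirty restroom."))
```

Aspects with no matched pairs simply receive no rating, which matches the intuition that a review silent on hygiene should not affect the hygiene score.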

* International Journal of Computer Science & Information Technology (IJCSIT) Vol 10, No 6, December 2018 


Dual Memory Network Model for Biased Product Review Classification

Sep 16, 2018
Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang

In sentiment analysis (SA) of product reviews, both user and product information have proven to be useful. Current methods handle user profile and product information in a unified model, which may not be able to learn salient features of users and products effectively. In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks. The two representations are then used jointly for sentiment prediction. The use of separate models aims to capture user profiles and product information more effectively. Compared to state-of-the-art unified prediction models, evaluations on three benchmark datasets, IMDB, Yelp13, and Yelp14, show that our dual learning model gives performance gains of 0.6%, 1.2%, and 0.9%, respectively. The improvements are also highly significant as measured by p-values.
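
The dual-memory idea can be reduced to a bare-bones sketch: one attention "hop" over the user's review memory and one over the product's, with the two summaries combined for prediction. All vectors below are toy values, not learned representations.

```python
# One memory-network hop per memory, then concatenation - the structural
# core of a dual user/product memory network.
import math

def attention_hop(query, memory):
    # Softmax dot-product attention over the memory slots, returning a
    # weighted summary vector.
    scores = [sum(q * m for q, m in zip(query, slot)) for slot in memory]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * slot[d] for w, slot in zip(weights, memory))
            for d in range(len(query))]

doc = [0.5, 1.0]                        # encoded review (toy values)
user_memory = [[1.0, 0.0], [0.8, 0.2]]  # the user's past reviews (toy)
prod_memory = [[0.0, 1.0], [0.1, 0.9]]  # the product's reviews (toy)

user_vec = attention_hop(doc, user_memory)  # separate user network
prod_vec = attention_hop(doc, prod_memory)  # separate product network
joint = user_vec + prod_vec                 # joined for sentiment prediction
print(joint)
```

Keeping the two memories separate is the point of the design: each attention hop specialises in its own signal instead of competing inside one shared network.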

* To appear in 2018 EMNLP 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis 


Molding CNNs for text: non-linear, non-consecutive convolutions

Aug 18, 2015
Tao Lei, Regina Barzilay, Tommi Jaakkola

The success of deep learning often derives from well-chosen operational building blocks. In this work, we revise the temporal convolution operation in CNNs to better adapt it to text processing. Instead of concatenating word representations, we appeal to tensor algebra and use low-rank n-gram tensors to directly exploit interactions between words already at the convolution stage. Moreover, we extend the n-gram convolution to non-consecutive words to recognize patterns with intervening words. Through a combination of low-rank tensors and pattern weighting, we can efficiently evaluate the resulting convolution operation via dynamic programming. We test the resulting architecture on standard sentiment classification and news categorization tasks. Our model achieves state-of-the-art performance both in terms of accuracy and training speed. For instance, we obtain 51.2% accuracy on the fine-grained sentiment classification task.
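
The dynamic-programming trick can be illustrated for the bigram case: every word pair (i, j) with i < j contributes, down-weighted by decay^gap, yet the sum over all O(n^2) pairs is computed in O(n). The scalar word weights below are toy stand-ins for the paper's low-rank tensor factors.

```python
# Non-consecutive bigram "convolution": naive O(n^2) reference vs. the
# O(n) dynamic program that carries a decayed running sum of left factors.

def nonconsecutive_bigram_score(w1, w2, decay=0.5):
    # Reference: sum over all (possibly non-adjacent) ordered pairs.
    n = len(w1)
    return sum(w1[i] * w2[j] * decay ** (j - i - 1)
               for i in range(n) for j in range(i + 1, n))

def nonconsecutive_bigram_dp(w1, w2, decay=0.5):
    # DP: 'running' holds the decay-weighted sum of all left factors seen
    # so far, so each position contributes running * (its right factor).
    total, running = 0.0, 0.0
    for a, b in zip(w1, w2):
        total += running * b
        running = decay * running + a
    return total

w1 = [1.0, 0.5, 2.0]  # left-position word weights (toy)
w2 = [0.3, 1.0, 0.7]  # right-position word weights (toy)
print(nonconsecutive_bigram_dp(w1, w2))
```

The same recurrence stacks once per n-gram position, which is how the paper extends it beyond bigrams while staying linear in sentence length.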



Evaluating Recurrent Neural Network Explanations

Jun 04, 2019
Leila Arras, Ahmed Osman, Klaus-Robert Müller, Wojciech Samek

Recently, several methods have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular of LSTMs. The goal of these methods is to understand the network's decisions by assigning to each input variable, e.g., a word, a relevance indicating the extent to which it contributed to a particular prediction. In previous works, some of these methods were not yet compared to one another, or were evaluated only qualitatively. We close this gap by systematically and quantitatively comparing these methods in different settings, namely (1) a toy arithmetic task which we use as a sanity check, (2) five-class sentiment prediction of movie reviews, and (3) an exploration of the usefulness of word relevances for building sentence-level representations. Lastly, using the method that performed best in our experiments, we show how specific linguistic phenomena, such as negation in sentiment analysis, are reflected in the relevance patterns, and how relevance visualization can help to understand the misclassification of individual samples.
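
The notion of per-word relevance can be illustrated with a toy occlusion-style explanation, in the spirit of (though much simpler than) the methods compared above: a word's relevance is the prediction drop when it is removed. The bag-of-words "model" is purely illustrative.

```python
# Occlusion relevance on a toy linear sentiment scorer: relevance of word i
# is f(x) - f(x without word i).

WEIGHTS = {"great": 2.0, "not": -1.5, "boring": -2.0, "movie": 0.1}

def predict(words):
    # Toy sentiment score: sum of known word weights.
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def relevances(words):
    full = predict(words)
    return {i: full - predict(words[:i] + words[i + 1:])
            for i in range(len(words))}

words = "a great movie".split()
print(relevances(words))  # "great" carries most of the relevance
```

Methods like LRP or gradient-times-input assign such scores without re-running the model once per word, but the interpretation of the resulting numbers is the same.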

* 14 pages, accepted for ACL'19 Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP 

