"Sentiment": models, code, and papers

Deriving Verb Predicates By Clustering Verbs with Arguments

Aug 01, 2017
Joao Sedoc, Derry Wijaya, Masoud Rouhizadeh, Andy Schwartz, Lyle Ungar

Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data, such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g. "marry(person, person)", "feel(person, emotion)". We make use of a novel low-dimensional embedding of verbs and their arguments to produce high-quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters at predicting sarcasm, sentiment, and locus of control in tweets.
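The core idea, clustering typed predicates in an embedding space so the same verb can fall into different clusters per argument type, can be sketched in a few lines. This is a minimal illustration with random stand-in embeddings and scikit-learn's k-means, not the authors' pipeline; the predicate list and dimensions are hypothetical.

```python
# Minimal sketch: cluster verb-plus-argument-type predicates with k-means.
# The embeddings here are random stand-ins (so cluster assignments are
# arbitrary); the paper uses a learned low-dimensional embedding of verbs
# and their VerbKB argument types.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
DIM = 50

# Hypothetical vocabulary of typed predicates, e.g. from VerbKB.
predicates = ["marry(person,person)", "feel(person,emotion)",
              "wed(person,person)", "sense(person,emotion)"]

# Stand-in embeddings; in practice these come from the learned model.
emb = {p: rng.normal(size=DIM) for p in predicates}

X = np.stack([emb[p] for p in predicates])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for pred, lab in zip(predicates, labels):
    print(lab, pred)  # the same verb can land in different clusters
                      # depending on its argument type
```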



Multi-task memory networks for category-specific aspect and opinion terms co-extraction

Jun 05, 2017
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier

In aspect-based sentiment analysis, most existing methods either focus on aspect/opinion terms extraction or aspect terms categorization. However, each task by itself only provides partial information to end users. To generate more detailed and structured opinion analysis, we propose a finer-grained problem, which we call category-specific aspect and opinion terms extraction. This problem involves the identification of aspect and opinion terms within each sentence, as well as the categorization of the identified terms. To this end, we propose an end-to-end multi-task attention model, where each task corresponds to aspect/opinion terms extraction for a specific category. Our model benefits from exploring the commonalities and relationships among different tasks to address the data sparsity issue. We demonstrate its state-of-the-art performance on three benchmark datasets.
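A rough sketch of what a model of this shape might look like: a shared encoder with one token-tagging head per category, so each task gets its own aspect/opinion extractor while commonalities live in the shared layers. The plain BiLSTM encoder, layer sizes, and three-way tag set are illustrative assumptions, not the paper's memory-network/attention architecture.

```python
# Sketch of the multi-task setup: a shared encoder, one tagging head per
# category. Each head tags tokens as aspect / opinion / other for its
# category; sizes and the tag set are assumptions for illustration.
import torch
import torch.nn as nn

class MultiTaskExtractor(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, n_categories, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder captures commonalities across tasks.
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # One classifier head per category (task).
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden, n_tags) for _ in range(n_categories))

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        # Returns per-category token-level tag logits.
        return [head(h) for head in self.heads]

model = MultiTaskExtractor(vocab_size=10000, emb_dim=100, hidden=128,
                           n_categories=5)
logits_per_category = model(torch.randint(0, 10000, (2, 20)))
```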



Context-aware Natural Language Generation with Recurrent Neural Networks

Nov 29, 2016
Jian Tang, Yifan Yang, Sam Carton, Ming Zhang, Qiaozhu Mei

This paper studies generating natural language in particular contexts or situations. We propose two novel approaches that encode the context into a continuous semantic representation and then decode that representation into text sequences with recurrent neural networks. During decoding, the context information is attended to through a gating mechanism, addressing the problem of long-range dependency caused by lengthy sequences. We evaluate the effectiveness of the proposed approaches on user review data, in which rich contexts are available; two informative contexts, sentiments and products, are selected for evaluation. Experiments show that the fake reviews generated by our approaches are very natural. Results of fake review detection with human judges show that more than 50% of the fake reviews are misclassified as real reviews, and more than 90% are misclassified by an existing state-of-the-art fake review detection algorithm.
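To make the gating mechanism concrete, here is a minimal sketch of one decoder step in which a sigmoid gate controls how much of the encoded context flows into the recurrent state. The GRU cell, dimensions, and fusion details are our assumptions, not the paper's exact formulation.

```python
# Sketch of the gating idea: at each decoding step, a learned sigmoid gate
# decides how much of the (fixed) context representation to mix into the
# recurrent state, keeping context accessible over long sequences.
import torch
import torch.nn as nn

class ContextGatedDecoderCell(nn.Module):
    def __init__(self, emb_dim, hidden, ctx_dim):
        super().__init__()
        self.cell = nn.GRUCell(emb_dim, hidden)
        self.gate = nn.Linear(hidden + ctx_dim, ctx_dim)
        self.proj = nn.Linear(ctx_dim, hidden)

    def forward(self, x_t, h_prev, context):
        h_t = self.cell(x_t, h_prev)
        # Gate the context by the current state, then fold it back in.
        g = torch.sigmoid(self.gate(torch.cat([h_t, context], dim=-1)))
        return h_t + self.proj(g * context)
```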



OmniGraph: Rich Representation and Graph Kernel Learning

Oct 10, 2015
Boyi Xie, Rebecca J. Passonneau

OmniGraph, a novel representation to support a range of NLP classification tasks, integrates lexical items, syntactic dependencies and frame semantic parses into graphs. Feature engineering is folded into the learning via convolution graph kernel learning that explores different extents of the graph. A high-dimensional space of features includes individual nodes as well as complex subgraphs. In experiments on a text-forecasting problem that predicts stock price change from news for company mentions, OmniGraph beats several benchmarks based on bag-of-words, syntactic dependencies, and semantic trees. The highly expressive features OmniGraph discovers provide insights into the semantics across distinct market sectors. To demonstrate the method's generality, we also report strong performance on a fine-grained sentiment corpus.
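As a much-simplified illustration of convolution-style graph kernel learning, the sketch below does one round of Weisfeiler-Lehman-style relabeling and takes a dot product over feature counts. OmniGraph's actual kernel explores far richer subgraph features over its lexical/syntactic/frame-semantic graphs; the toy graphs here are made up.

```python
# Simplified convolution-style graph kernel: 0-hop features are node
# labels, 1-hop features are (label, sorted neighbor labels) signatures;
# the kernel is a dot product over feature counts.
from collections import Counter

def wl_features(labels, edges):
    """labels: node -> label; edges: list of undirected (u, v) pairs."""
    nbrs = {n: [] for n in labels}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    feats = Counter(labels.values())  # 0-hop: bare node labels
    for n, lab in labels.items():     # 1-hop: label + neighborhood
        sig = (lab, tuple(sorted(labels[m] for m in nbrs[n])))
        feats[sig] += 1
    return feats

def kernel(g1, g2):
    f1, f2 = wl_features(*g1), wl_features(*g2)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

g_a = ({0: "buy", 1: "company", 2: "shares"}, [(0, 1), (0, 2)])
g_b = ({0: "buy", 1: "firm", 2: "shares"}, [(0, 1), (0, 2)])
print(kernel(g_a, g_b))  # similarity from shared (sub)structures
```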



Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks

Nov 10, 2020
Alex Mathai, Shreya Khare, Srikanth Tamilselvam, Senthil Mani

We propose a novel genetic-algorithm technique that generates black-box adversarial examples which successfully fool neural network based text classifiers. We perform a genetic search with multi-objective optimization guided by deep learning based inferences and Seq2Seq mutation to generate semantically similar but imperceptible adversaries. We compare our approach with DeepWordBug (DWB) on the SST and IMDB sentiment datasets by attacking three trained models, namely char-LSTM, word-LSTM and elmo-LSTM. On average, we achieve an attack success rate of 65.67% for SST and 36.45% for IMDB across the three models, an improvement of 49.48% and 101%, respectively. Furthermore, our qualitative study indicates that 94% of the time, users were not able to distinguish between an original and an adversarial sample.
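The search itself can be sketched as a standard genetic loop. Everything below is a toy stand-in: the victim model, similarity function, and synonym table are hypothetical, the two objectives are scalarized rather than kept on a Pareto front, and the paper's Seq2Seq mutation is replaced by naive synonym swaps.

```python
# Toy sketch of the attack loop: evolve perturbed sentences under two
# objectives (flip the classifier, stay semantically close).
import random

SYNONYMS = {"good": ["fine", "decent"], "movie": ["film", "picture"]}  # stub

def mutate(tokens):
    out = list(tokens)
    i = random.randrange(len(out))
    if out[i] in SYNONYMS:
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def fitness(tokens, original, victim_prob, similarity):
    # Two objectives, scalarized here for simplicity; the true
    # multi-objective version keeps them separate on a Pareto front.
    return (1.0 - victim_prob(tokens)) + similarity(tokens, original)

def attack(original, victim_prob, similarity, pop=20, gens=30):
    population = [mutate(original) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda t: fitness(t, original, victim_prob,
                                              similarity), reverse=True)
        survivors = population[: pop // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return population[0]

# Stub victim and similarity, purely for demonstration.
adv = attack("a good movie".split(),
             victim_prob=lambda t: 0.9 if "good" in t else 0.4,
             similarity=lambda t, o: sum(a == b for a, b in zip(t, o)) / len(o))
print(" ".join(adv))
```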



Understand customer reviews with less data and in short time: pretrained language representation and active learning

Oct 29, 2019
Yanwei Cui, Xavier Illy

In this paper, we address customer review understanding using supervised machine learning, aiming at fully automatic categorisation of review aspects and sentiment analysis. In general, such supervised learning algorithms require domain-specific expert knowledge to generate high-quality labeled training data, and the cost of labeling can be very high. To build an in-production, machine-learning-enabled customer review analysis tool with only a limited amount of data and within a reasonable data collection time, we propose using pre-trained language representations to boost model performance and an active learning framework to accelerate the iterative training process. The results show that with both components integrated, fully automatic review analysis can be achieved at a much faster pace.
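A minimal sketch of the proposed combination: pre-trained representations feed a cheap classifier, and an uncertainty-sampling loop decides which reviews to label next. The encoder, oracle, and logistic-regression classifier below are stand-ins for whatever components a production system would use; the seed set is assumed to contain both classes.

```python
# Sketch of the active learning loop: embed reviews with a pre-trained
# encoder, train a cheap classifier, and repeatedly query labels only
# for the examples the model is least sure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(embed, reviews, oracle_label, seed_idx, rounds=5, k=10):
    X = np.stack([embed(r) for r in reviews])   # pre-trained representations
    labeled = set(seed_idx)
    y = {i: oracle_label(reviews[i]) for i in labeled}
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        idx = sorted(labeled)
        clf.fit(X[idx], [y[i] for i in idx])
        proba = clf.predict_proba(X)
        # Uncertainty sampling: query the k examples closest to 0.5.
        margin = np.abs(proba[:, 1] - 0.5)
        margin[idx] = np.inf                    # never re-query labeled ones
        for i in np.argsort(margin)[:k]:
            labeled.add(int(i))
            y[int(i)] = oracle_label(reviews[int(i)])
    return clf
```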



Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders

Oct 25, 2019
Andy T. Liu, Shu-wen Yang, Po-Han Chi, Po-chun Hsu, Hung-yi Lee

We present Mockingjay as a new speech representation learning approach, where bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech. Previous speech representation methods learn by conditioning on past frames and predicting information about future frames. Mockingjay, in contrast, is designed to predict the current frame by jointly conditioning on both past and future contexts. The Mockingjay representation improves performance on a wide range of downstream tasks, including phoneme classification, speaker recognition, and sentiment classification on spoken content, while outperforming other approaches. Mockingjay is empirically powerful and can be fine-tuned with downstream models; with only 2 epochs we further improve performance dramatically. In a low-resource setting with only 0.1% of the labeled data, we outperform the result of Mel-features trained with all 100% of the labeled data.
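Conceptually, the pre-training objective resembles masked language modeling applied to acoustic frames. Below is a minimal PyTorch sketch under that reading: mask a fraction of input frames and train a bidirectional Transformer encoder to reconstruct them. The sizes, masking rate, and L1 loss are illustrative assumptions, not Mockingjay's exact configuration.

```python
# Sketch of masked acoustic-frame pre-training: zero out random frames
# and reconstruct them from both past and future context.
import torch
import torch.nn as nn

class MaskedFrameModel(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_layers=3, n_heads=4):
        super().__init__()
        self.inp = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, n_mels)

    def forward(self, frames, mask_rate=0.15):
        mask = torch.rand(frames.shape[:2]) < mask_rate
        x = frames.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked frames
        pred = self.out(self.encoder(self.inp(x)))
        # Reconstruction loss only on the masked positions.
        return nn.functional.l1_loss(pred[mask], frames[mask])

model = MaskedFrameModel()
loss = model(torch.randn(4, 100, 80))  # (batch, time, mel bins)
```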

* Submitted to ICASSP 2020 


DepecheMood++: a Bilingual Emotion Lexicon Built Through Simple Yet Powerful Techniques

Oct 08, 2018
Oscar Araque, Lorenzo Gatti, Jacopo Staiano, Marco Guerini

Several lexica for sentiment analysis have been developed and made available to the NLP community. While most of these come with word polarity annotations (e.g. positive/negative), attempts at building lexica for finer-grained emotion analysis (e.g. happiness, sadness) have recently attracted significant attention. Such lexica are often exploited as a building block in developing learning models that require emotion recognition, and/or used as baselines against which to compare the performance of those models. In this work, we contribute two new resources to the community: a) an extension of an existing and widely used emotion lexicon for English; and b) a novel version of the lexicon targeting Italian. Furthermore, we show how simple techniques can be used, in both supervised and unsupervised experimental settings, to boost performance on datasets and tasks of varying degrees of domain specificity.
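The typical unsupervised use of such a lexicon is simple enough to show directly: average the per-word emotion scores over a document's tokens. The tiny lexicon and emotion set below are made up for illustration; DepecheMood++ itself supplies real scores for English and Italian.

```python
# Sketch of unsupervised lexicon-based emotion scoring: look up each
# token's emotion vector and average over the document.
import numpy as np

EMOTIONS = ["happiness", "sadness", "fear"]
LEXICON = {  # word -> scores over EMOTIONS (illustrative values only)
    "sunny":   [0.80, 0.10, 0.10],
    "funeral": [0.05, 0.85, 0.10],
}

def emotion_profile(tokens):
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    if not hits:
        return dict.fromkeys(EMOTIONS, 0.0)
    mean = np.mean(hits, axis=0)
    return dict(zip(EMOTIONS, mean.round(3).tolist()))

print(emotion_profile("a sunny day after the funeral".split()))
```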

* 12 pages, 2 figures 


Enhancing Sentence Embedding with Generalized Pooling

Jun 26, 2018
Qian Chen, Zhen-Hua Ling, Xiaodan Zhu

Pooling is an essential component of a wide variety of sentence representation and embedding models. This paper explores generalized pooling methods to enhance sentence embedding. We propose vector-based multi-head attention, which includes the widely used max pooling, mean pooling, and scalar self-attention as special cases. The model benefits from properly designed penalization terms that reduce redundancy in multi-head attention. We evaluate the proposed model on three different tasks: natural language inference (NLI), author profiling, and sentiment classification. The experiments show that the proposed model achieves significant improvement over strong sentence-encoding-based methods, resulting in state-of-the-art performance on four datasets. The proposed approach can be easily implemented for more problems than we discuss in this paper.
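A rough sketch of vector-based attention pooling: each head produces per-dimension weights over time (so max and mean pooling are recoverable as special weightings), and a penalty discourages heads from attending identically. Dimensions and the exact penalty form are simplified assumptions, not the paper's equations.

```python
# Sketch of generalized pooling: per-head, per-dimension attention
# weights over time, plus a redundancy penalty across heads.
import torch
import torch.nn as nn

class GeneralizedPooling(nn.Module):
    def __init__(self, hidden, n_heads=4, att_dim=64):
        super().__init__()
        # Each head maps token states to per-dimension attention logits,
        # so pooling weights are vectors rather than scalars.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, att_dim), nn.ReLU(),
                          nn.Linear(att_dim, hidden))
            for _ in range(n_heads))

    def forward(self, h):                       # h: (batch, time, hidden)
        pooled, att_maps = [], []
        for head in self.heads:
            a = torch.softmax(head(h), dim=1)   # weights over time, per dim
            pooled.append((a * h).sum(dim=1))
            att_maps.append(a.mean(dim=-1))     # summary for the penalty
        A = torch.stack(att_maps, dim=1)        # (batch, heads, time)
        eye = torch.eye(A.size(1))
        # Redundancy penalty in the spirit of ||A A^T - I||_F^2.
        penalty = ((A @ A.transpose(1, 2) - eye) ** 2).sum()
        return torch.cat(pooled, dim=-1), penalty
```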

* Accepted by COLING 2018 


Robust Unsupervised Domain Adaptation for Neural Networks via Moment Alignment

May 28, 2018
Werner Zellinger, Bernhard A. Moser, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, Susanne Saminger-Platz

A novel approach for unsupervised domain adaptation for neural networks is proposed that relies on metric-based regularization of the learning process. The metric-based regularization aims at domain-invariant latent feature representations by means of maximizing the similarity between domain-specific activation distributions. The proposed metric results from modifying an integral probability metric such that it becomes translation-invariant on a polynomial function space. The metric has an intuitive interpretation in the dual space as the sum of differences of higher order central moments of the corresponding activation distributions. Error minimization guarantees are proven for the continuous case. As demonstrated by an analysis of standard benchmark experiments for sentiment analysis, object recognition and digit recognition, the outlined approach is robust regarding parameter changes and achieves higher classification accuracies than comparable approaches.
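The dual-space interpretation suggests a direct implementation: penalize differences between the means and higher-order central moments of source and target activations. The sketch below follows that description; the choice of norm and the absence of scaling constants are simplified assumptions rather than the paper's exact metric.

```python
# Sketch of moment alignment: match the mean and central moments 2..K
# of source and target hidden activations from one layer.
import torch

def moment_alignment(x_src, x_tgt, k_max=5):
    """x_src, x_tgt: (batch, features) activations from one layer."""
    m_src, m_tgt = x_src.mean(0), x_tgt.mean(0)
    loss = torch.norm(m_src - m_tgt)            # first moments
    for k in range(2, k_max + 1):               # central moments 2..K
        c_src = ((x_src - m_src) ** k).mean(0)
        c_tgt = ((x_tgt - m_tgt) ** k).mean(0)
        loss = loss + torch.norm(c_src - c_tgt)
    return loss

# Added to the task loss as a regularizer, this pushes source and target
# activations toward domain-invariant distributions.
reg = moment_alignment(torch.randn(64, 128), torch.randn(64, 128))
```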


