"Sentiment": models, code, and papers

Multi-Task Text Classification using Graph Convolutional Networks for Large-Scale Low Resource Language

May 02, 2022
Mounika Marreddy, Subba Reddy Oota, Lakshmi Sireesha Vakada, Venkata Charan Chinni, Radhika Mamidi

Graph Convolutional Networks (GCN) have achieved state-of-the-art results on single text classification tasks such as sentiment analysis and emotion detection. However, this performance has been demonstrated mainly on resource-rich languages like English, and applying GCN to multi-task text classification remains unexplored. Moreover, training a GCN or adapting an English GCN to Indian languages is often limited by data availability, rich morphological variation, and syntactic and semantic differences. In this paper, we study the use of GCN for the Telugu language in single- and multi-task settings for four natural language processing (NLP) tasks: sentiment analysis (SA), emotion identification (EI), hate-speech detection (HS), and sarcasm detection (SAR). To evaluate GCN on an Indian language, Telugu, we analyze GCN-based models with extensive experiments on the four downstream tasks. In addition, we create an annotated Telugu dataset, TEL-NLP, for the four NLP tasks. Further, we propose a supervised graph reconstruction method, Multi-Task Text GCN (MT-Text GCN), for Telugu that simultaneously (i) learns low-dimensional word and sentence graph embeddings via word-sentence graph reconstruction with a graph autoencoder (GAE) and (ii) performs multi-task text classification using these latent sentence graph embeddings. Our proposed MT-Text GCN achieves significant improvements on TEL-NLP over existing Telugu pretrained word embeddings and the multilingual pretrained Transformer models mBERT and XLM-R. On TEL-NLP, we achieve high F1-scores on the four tasks: SA (0.84), EI (0.55), HS (0.83), and SAR (0.66). Finally, we present quantitative and qualitative analyses of our model on the four NLP tasks in Telugu.
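
The abstract only outlines the architecture; as a rough illustration (not the authors' released code), the following PyTorch sketch shows a shared two-layer GCN encoder over a normalized word-sentence adjacency matrix with one classification head per task. All dimensions, class counts, and layer choices below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, h):
        # adj_norm: symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2
        return F.relu(self.linear(adj_norm @ h))

class MultiTaskTextGCN(nn.Module):
    def __init__(self, n_nodes, n_classes, hidden=200, emb=100):
        super().__init__()
        self.gcn1 = GCNLayer(n_nodes, hidden)   # identity node features -> hidden
        self.gcn2 = GCNLayer(hidden, emb)       # hidden -> latent node embeddings
        self.heads = nn.ModuleDict(
            {task: nn.Linear(emb, c) for task, c in n_classes.items()})

    def forward(self, adj_norm, features, sent_idx):
        z = self.gcn2(adj_norm, self.gcn1(adj_norm, features))
        sent_z = z[sent_idx]                    # keep sentence-node embeddings only
        # A GAE reconstruction term, e.g. BCE between sigmoid(z @ z.T) and the
        # adjacency, could be added to train the graph embeddings jointly.
        return {task: head(sent_z) for task, head in self.heads.items()}

# Hypothetical usage; graph size and per-task class counts are assumptions.
model = MultiTaskTextGCN(n_nodes=5000,
                         n_classes={"SA": 2, "EI": 4, "HS": 2, "SAR": 2})
```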

* 9 pages, 6 figures 

Stock Market Forecasting Based on Text Mining Technology: A Support Vector Machine Method

Sep 27, 2019
Yancong Xie, Hongxun Jiang

News items have a significant impact on stock markets, but the mechanisms are obscure. Many previous works have aimed at building accurate stock market forecasting models. In this paper, we use text mining and sentiment analysis of Chinese online financial news to predict Chinese stock tendencies and stock prices with support vector machines (SVM). First, we collect 2,302,692 news items dating from 1/1/2008 to 1/1/2015. Second, based on this dataset, we build a domain-specific stop-word dictionary and a precise sentiment dictionary. Third, we propose a forecasting model using SVM; for the SVM implementation, we also propose optimization algorithms for the two SVM parameters to search for the best initial parameter setting. The results show that parameter G has the main effect, while parameter C's effect is not obvious. Furthermore, support vector regression (SVR) models for different Chinese stocks are similar, whereas the best parameters of support vector classification (SVC) models differ considerably. A series of contrast experiments shows that: a) news has a significant influence on the stock market; b) expanding the input vector to handle days with no news data is better than the normal input in SVR, yet worse in SVC; c) SVR fits stock fluctuations remarkably well, although the predictions exhibit some time lag; d) the time lag of news effects on the stock market is less than two days; e) in SVC, historical stock data is most effective at a time lag of about 10 days, whereas in SVR this effect is not obvious. In addition, based on the structure of the input vector, we design a method to calculate the impact factor of each financial news source. The results suggest that both news quality and audience size have a significant effect on the source impact factor, and that for Chinese investors traditional media has more influence than digital media.
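
As a hedged illustration of the parameter search the abstract describes (C and G, i.e., gamma, for RBF-kernel SVMs), a scikit-learn sketch on placeholder data might look like the following; the construction of features from news sentiment and historical prices is only assumed here, not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                 # placeholder: sentiment + lagged prices
y_cls = (rng.random(500) > 0.5).astype(int)    # up/down stock tendency
y_reg = rng.normal(size=500)                   # next-day price change

# Grid over the two RBF-kernel parameters the abstract calls C and G (gamma).
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
cv = TimeSeriesSplit(n_splits=5)               # respect temporal ordering

svc = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv).fit(X, y_cls)
svr = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=cv).fit(X, y_reg)
print("SVC best:", svc.best_params_, "| SVR best:", svr.best_params_)
```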

* J. Comp. 12 (2017) 500-510 
* 11 pages, 10 figures, 5 tables 

Sentiment Predictability for Stocks

Jan 18, 2018
Jordan Prosky, Xingyou Song, Andrew Tan, Michael Zhao

In this work, we present our findings and experiments for stock-market prediction using various textual sentiment analysis tools, such as mood analysis and event extraction, as well as prediction models, such as LSTMs and specific convolutional architectures.

* 9 pages 

A Perspective on Sentiment Analysis

Jul 25, 2016
K Paramesha, K C Ravishankar

Sentiment Analysis (SA) is a fascinating area of research that has captured the attention of researchers, as it has many facets and, more importantly, promises economic stakes in the corporate and governance sectors. SA stemmed from text analytics and has established itself as a separate domain of research. The wide-ranging results of SA have been shown to influence the way some critical decisions are taken. Hence, a thorough understanding of the different dimensions of SA's inputs, outputs, processes, and approaches has become relevant.

* Proceedings of ERCICA 2014 - Emerging Research in Computing Information Communication and Applications, Vol. 1, Elsevier, NMIT, Bengaluru, India, 2014, pp. 412-418 
* Opinion; Feature Engineering; Sentiment; Subjective Sentence; Objective; Contextual Polarity; Sentiment Lexicon; Classification; Machine Learning 

Group Visual Sentiment Analysis

Jan 07, 2017
Zeshan Hussain, Tariq Patanam, Hardie Cate

In this paper, we introduce a framework for classifying images according to high-level sentiment. We subdivide the task into three primary problems: emotion classification on faces, human pose estimation, and 3D estimation and clustering of groups of people. We introduce novel algorithms for matching body parts to a common individual and clustering people in images based on physical location and orientation. Our results outperform several baseline approaches.
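
As an illustrative sketch only (not the authors' algorithm), one simple way to group detected people by image location and orientation, as the abstract describes, is density-based clustering on standardized position and orientation features; all values below are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical per-person features: (x, y) image centroid and orientation in radians.
people = np.array([[120, 340, 0.1],
                   [135, 345, 0.2],
                   [600, 320, 3.0],
                   [610, 315, 3.1]], dtype=float)

features = StandardScaler().fit_transform(people)
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(features)
print(labels)   # people close in position and orientation share a cluster label
```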

* 7 pages 

Sentiment Classification of Food Reviews

Sep 07, 2016
Hua Feng, Ruixi Lin

Sentiment analysis of reviews is a popular task in natural language processing. In this work, the goal is to predict the score of food reviews on a scale of 1 to 5 with two carefully tuned recurrent neural networks. As a baseline, we train a simple RNN for classification; we then extend the baseline to a GRU. In addition, we present two different methods for dealing with highly skewed data, a common problem for reviews. Models are evaluated using accuracy.
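
A minimal PyTorch sketch of this kind of setup, with assumed vocabulary size and dimensions: a GRU classifier over review tokens, plus inverse-frequency class weights as one common remedy for a skewed score distribution (the paper's own two methods may differ).

```python
import torch
import torch.nn as nn

class GRUReviewClassifier(nn.Module):
    def __init__(self, vocab_size=20000, emb=128, hidden=256, n_scores=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_scores)    # scores 1-5 as 5 classes

    def forward(self, token_ids):                 # (batch, seq_len) of token ids
        _, h_last = self.gru(self.embed(token_ids))
        return self.out(h_last.squeeze(0))        # (batch, 5) logits

# Inverse-frequency class weights counter the dominance of high-score reviews
# (the counts below are made up for illustration).
class_counts = torch.tensor([50., 30., 40., 120., 400.])
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)
```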

Sentiment and Sarcasm Classification with Multitask Learning

Jan 23, 2019
Navonil Majumder, Soujanya Poria, Haiyun Peng, Niyati Chhaya, Erik Cambria, Alexander Gelbukh

Sentiment classification and sarcasm detection are both important NLP tasks. We show that these two tasks are correlated, and we present a multi-task learning framework using a deep neural network that models this correlation to improve the performance of both tasks in a multi-task learning setting.
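
A hedged sketch of the general idea (not the paper's exact architecture): a shared encoder with separate sentiment and sarcasm heads trained on a weighted joint loss. Vocabulary size, dimensions, and the loss weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSentimentSarcasm(nn.Module):
    def __init__(self, vocab_size=30000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.sentiment_head = nn.Linear(hidden, 2)   # positive / negative
        self.sarcasm_head = nn.Linear(hidden, 2)     # sarcastic / not

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        shared = h.squeeze(0)                        # representation shared by both tasks
        return self.sentiment_head(shared), self.sarcasm_head(shared)

def joint_loss(sent_logits, sarc_logits, sent_y, sarc_y, alpha=0.5):
    # Weighted sum of the two task losses; alpha is an assumed hyperparameter.
    return (alpha * F.cross_entropy(sent_logits, sent_y)
            + (1 - alpha) * F.cross_entropy(sarc_logits, sarc_y))
```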

Cross-Lingual Sentiment Quantification

Apr 16, 2019
Andrea Esuli, Alejandro Moreo, Fabrizio Sebastiani

We discuss \emph{Cross-Lingual Text Quantification} (CLTQ), the task of performing text quantification (i.e., estimating the relative frequency $p_{c}(D)$ of all classes $c\in\mathcal{C}$ in a set $D$ of unlabelled documents) when training documents are available for a source language $\mathcal{S}$ but not for the target language $\mathcal{T}$ for which quantification needs to be performed. CLTQ has never been discussed before in the literature; we establish baseline results for the binary case by combining state-of-the-art quantification methods with methods capable of generating cross-lingual vectorial representations of the source and target documents involved. We present experimental results obtained on publicly available datasets for cross-lingual sentiment classification; the results show that the presented methods can perform CLTQ with a surprising level of accuracy.
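
For the binary case, the simplest quantification baselines are classify-and-count (CC) and adjusted classify-and-count (ACC); a short illustrative sketch of both is below. How these combine with cross-lingual document representations is specific to the paper, and the data in the example is hypothetical.

```python
import numpy as np

def classify_and_count(preds):
    """CC: estimated prevalence of the positive class in the unlabelled set D."""
    return float(np.mean(preds))

def adjusted_classify_and_count(preds, tpr, fpr):
    """ACC: correct CC using true/false positive rates estimated on held-out data."""
    cc = classify_and_count(preds)
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))

# Hypothetical target-language predictions from a cross-lingual classifier.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
print(classify_and_count(preds),
      adjusted_classify_and_count(preds, tpr=0.85, fpr=0.15))
```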

Semisupervised Autoencoder for Sentiment Analysis

Dec 14, 2015
Shuangfei Zhai, Zhongfei Zhang

In this paper, we investigate the use of autoencoders for modeling textual data. Traditional autoencoders suffer from at least two limitations: scaling to the high dimensionality of the vocabulary and dealing with task-irrelevant words. We address these problems by introducing supervision via the autoencoder's loss function. In particular, we first train a linear classifier on the labeled data, then define a loss for the autoencoder using the weights learned by the linear classifier. To reduce the bias introduced by a single classifier, we define a posterior probability distribution on the classifier weights and derive the marginalized loss of the autoencoder with a Laplace approximation. We show that our choice of loss function can be rationalized from the perspective of Bregman divergence, which justifies the soundness of our model. We evaluate our model on six sentiment analysis datasets and show that it significantly outperforms all competing methods in classification accuracy. We also show that our model can take advantage of unlabeled data to improve performance. We further show that it learns highly discriminative feature maps, which explains its superior performance.
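
A highly simplified sketch of the core idea, supervising an autoencoder with a frozen linear classifier: the reconstruction is scored by the classifier weights and asked to still predict the document's label. The paper's marginalized loss with a Laplace approximation over the classifier weights is more involved than this; dimensions, the reconstruction term, and names here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedAE(nn.Module):
    def __init__(self, vocab_size=10000, latent=200):
        super().__init__()
        self.enc = nn.Linear(vocab_size, latent)
        self.dec = nn.Linear(latent, vocab_size)

    def forward(self, x):                       # x: bag-of-words vectors
        return self.dec(torch.relu(self.enc(x)))

def loss_fn(x, x_hat, y, w, b, lam=1.0):
    # Standard reconstruction term plus a label-aware term: the reconstruction,
    # scored by the frozen linear classifier (w, b), should still predict y.
    recon = F.mse_loss(x_hat, x)
    logits = x_hat @ w + b
    supervised = F.binary_cross_entropy_with_logits(logits, y.float())
    return recon + lam * supervised
```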

* To appear in AAAI 2016 

Variational Fusion for Multimodal Sentiment Analysis

Aug 13, 2019
Navonil Majumder, Soujanya Poria, Gangeshwar Krishnamurthy, Niyati Chhaya, Rada Mihalcea, Alexander Gelbukh

Multimodal fusion is considered a key step in multimodal tasks such as sentiment analysis, emotion detection, question answering, and others. Most of the recent work on multimodal fusion does not guarantee the fidelity of the multimodal representation with respect to the unimodal representations. In this paper, we propose a variational autoencoder-based approach for modality fusion that minimizes information loss between unimodal and multimodal representations. We empirically show that this method outperforms the state-of-the-art methods by a significant margin on several popular datasets.
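
A rough sketch of a VAE-style fusion module consistent with this description: unimodal features are encoded into a latent multimodal code and decoded back to each modality, so the fused representation retains unimodal information. Modality names, dimensions, and the KL weighting are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class VariationalFusion(nn.Module):
    def __init__(self, dims, latent=128):        # dims: e.g. {"text": 300, "audio": 74, "video": 35}
        super().__init__()
        total = sum(dims.values())
        self.to_mu = nn.Linear(total, latent)
        self.to_logvar = nn.Linear(total, latent)
        self.decoders = nn.ModuleDict({m: nn.Linear(latent, d) for m, d in dims.items()})

    def forward(self, feats):                    # feats: dict of unimodal feature tensors
        x = torch.cat([feats[m] for m in self.decoders], dim=-1)
        mu, logvar = self.to_mu(x), self.to_logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recons = {m: dec(z) for m, dec in self.decoders.items()}  # reconstruct each modality
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recons, kl                     # z feeds a downstream sentiment classifier
```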
