"Sentiment": models, code, and papers

Ballpark Learning: Estimating Labels from Rough Group Comparisons

Jun 30, 2016
Tom Hope, Dafna Shahaf

We are interested in estimating individual labels given only coarse, aggregated signal over the data points. In our setting, we receive sets ("bags") of unlabeled instances with constraints on label proportions. We relax the unrealistic assumption of known label proportions made in previous work; instead, we assume only upper and lower bounds, together with constraints on bag differences. We motivate the problem, propose an intuitive formulation and algorithm, and apply our methods to real-world scenarios. Across several domains, we show that, using only proportion constraints and no labeled examples, we can achieve surprisingly high accuracy. In particular, we demonstrate how to predict income level using rough stereotypes and how to perform sentiment analysis using very little information. We also apply our method to guide exploratory analysis, recovering geographical differences in Twitter dialect.

* To appear in the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 2016 
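
The abstract does not spell out the optimization, but the idea of fitting a classifier against bag-level proportion bounds can be sketched roughly as follows; the logistic model, penalty weights, and synthetic bags below are assumptions for illustration, and the paper's bag-difference constraints are omitted.

```python
# Illustrative sketch: learn a classifier so that predicted label proportions
# per bag stay inside rough lower/upper bounds (no instance labels used).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: 3 bags of 2-D points with different (unknown) positive rates.
bags = [rng.normal(loc=m, scale=1.0, size=(100, 2)) for m in (-1.0, 0.0, 1.0)]
X = np.vstack(bags)
bag_idx = np.repeat(np.arange(3), 100)

# Rough proportion bounds per bag: (lower, upper) on the positive fraction.
bounds = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]

def soft_labels(w, X):
    return 1.0 / (1.0 + np.exp(-(X @ w[:2] + w[2])))      # logistic scores

def objective(w, lam=1e-2, rho=10.0):
    p = soft_labels(w, X)
    penalty = 0.0
    for b, (lo, hi) in enumerate(bounds):
        frac = p[bag_idx == b].mean()                       # predicted bag proportion
        penalty += max(0.0, lo - frac) ** 2 + max(0.0, frac - hi) ** 2
    return lam * np.dot(w[:2], w[:2]) + rho * penalty       # regularizer + violations

res = minimize(objective, np.zeros(3), method="Nelder-Mead")
print("learned weights:", res.x)
print("predicted positive fraction per bag:",
      [round(soft_labels(res.x, X)[bag_idx == b].mean(), 2) for b in range(3)])
```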

openXBOW - Introducing the Passau Open-Source Crossmodal Bag-of-Words Toolkit

May 22, 2016
Maximilian Schmitt, Björn W. Schuller

We introduce openXBOW, an open-source toolkit for the generation of bag-of-words (BoW) representations from multimodal input. In the BoW principle, word histograms were first used as features in document classification, but the idea can easily be adapted to, e.g., acoustic or visual low-level descriptors by introducing a prior vector quantisation step. The openXBOW toolkit supports arbitrary numeric input features as well as text input and concatenates the computed subbags into a final bag. It provides a variety of extensions and options. To our knowledge, openXBOW is the first publicly available toolkit for the generation of crossmodal bags-of-words. The capabilities of the tool are exemplified in two sample scenarios: time-continuous speech-based emotion recognition and sentiment analysis of tweets, where improved results over other feature representations were observed.

* 9 pages, 1 figure, pre-print 
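
openXBOW itself is a Java command-line toolkit; the sketch below only illustrates the underlying crossmodal BoW idea (vector quantisation of low-level descriptors, then per-instance histograms) in Python, with made-up descriptor matrices standing in for real acoustic or visual features.

```python
# Codebook via k-means, then one normalised term-frequency histogram per instance.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Pretend low-level descriptors: one (frames x features) matrix per instance.
instances = [rng.normal(size=(rng.integers(50, 120), 6)) for _ in range(20)]

# 1) Learn a codebook over all frames (vector quantisation).
codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack(instances))

# 2) Assign each frame to its nearest codeword and build one histogram ("bag")
#    per instance; subbags from other modalities could be concatenated here.
def bag_of_words(frames):
    words = codebook.predict(frames)
    hist = np.bincount(words, minlength=32).astype(float)
    return hist / hist.sum()                    # normalised term frequencies

bags = np.stack([bag_of_words(x) for x in instances])
print(bags.shape)                               # (20, 32): one BoW vector per instance
```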

Construction of Vietnamese SentiWordNet by using Vietnamese Dictionary

Dec 27, 2014
Xuan-Son Vu, Seong-Bae Park

SentiWordNet is an important lexical resource supporting sentiment analysis in opinion mining applications. In this paper, we propose a novel approach to constructing a Vietnamese SentiWordNet (VSWN). A SentiWordNet is typically generated from WordNet, in which each synset has numerical scores indicating its opinion polarities, and many previous studies obtained these scores by applying a machine learning method to WordNet. However, a Vietnamese WordNet was unfortunately not available at the time of writing. We therefore propose a method to construct VSWN from a Vietnamese dictionary rather than from WordNet. We show the effectiveness of the proposed method by automatically generating a VSWN with 39,561 synsets. The method is experimentally tested on 266 synsets with respect to positivity and negativity. It attains results competitive with the English SentiWordNet, with differences of 0.066 and 0.052 for the positivity and negativity sets, respectively.

* The 40th Conference of the Korea Information Processing Society, pp. 745-748, April 2014, South Korea 
* accepted on April-9th-2014, best paper award 
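
As a rough illustration of the gloss-classification idea behind SentiWordNet-style scoring, the toy sketch below trains a classifier on a few made-up seed glosses and reads its probabilities as positivity/negativity scores; it is not the authors' Vietnamese pipeline, and the glosses are invented English examples.

```python
# Score unseen entries by classifying their dictionary glosses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed_glosses = [
    "feeling great happiness or joy",        # positive seeds
    "deserving praise and admiration",
    "feeling great sorrow or sadness",       # negative seeds
    "deserving blame and contempt",
]
seed_labels = [1, 1, 0, 0]                   # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_glosses, seed_labels)

# Score new synsets by the glosses taken from the dictionary.
new_glosses = ["a state of happiness and joy", "a state of sorrow and sadness"]
for gloss, p_pos in zip(new_glosses, model.predict_proba(new_glosses)[:, 1]):
    print(f"{gloss!r}: positivity={p_pos:.2f}, negativity={1 - p_pos:.2f}")
```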

Summarizing Reviews with Variable-length Syntactic Patterns and Topic Models

Nov 21, 2012
Trung V. Nguyen, Alice H. Oh

We present a novel summarization framework for reviews of products and services that selects informative and concise text segments from the reviews. Our method consists of two major steps. First, we identify five frequently occurring variable-length syntactic patterns and use them to extract candidate segments. Then we use the output of a joint generative sentiment topic model to filter out the non-informative segments. We verify the proposed method with quantitative and qualitative experiments. In a quantitative study, our approach outperforms previous methods in producing informative segments and summaries that capture aspects of products and services as expressed in user-generated pros and cons lists. Our user study with ninety users resonates with this result: individual segments extracted and filtered by our method are rated by users as more useful than those produced by previous approaches.
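
The first step (extracting candidate segments that match variable-length syntactic patterns) can be sketched as pattern matching over POS-tag sequences; the two patterns and the tagged example below are illustrative rather than the paper's five patterns, and the sentiment topic model filtering step is not reproduced.

```python
# Match variable-length POS patterns and print the corresponding word segments.
import re

# Pre-tagged review tokens: (word, POS) pairs, e.g. from any POS tagger.
tagged = [("the", "DT"), ("battery", "NN"), ("life", "NN"), ("is", "VBZ"),
          ("really", "RB"), ("great", "JJ"), ("but", "CC"),
          ("poor", "JJ"), ("screen", "NN"), ("quality", "NN")]

# Patterns over the tag sequence: an (optional adverb +) adjective before a noun
# phrase, or a noun phrase followed by a copula and an adjective phrase.
patterns = [r"(RB\s)?JJ\s(NN\s?)+", r"(NN\s)+VBZ\s(RB\s)?JJ"]

tags = " ".join(t for _, t in tagged) + " "
for pat in patterns:
    for m in re.finditer(pat, tags):
        start = tags[: m.start()].count(" ")                 # token index of match start
        end = start + m.group().strip().count(" ") + 1       # token index past match end
        print(" ".join(w for w, _ in tagged[start:end]))
```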


User Guide for KOTE: Korean Online Comments Emotions Dataset

May 11, 2022
Duyoung Jeon, Junho Lee, Cheongtag Kim

Sentiment analysis that classifies texts as positive or negative has been the dominant way of recognizing their emotional aspects, despite offering only a shallow treatment of emotional meaning. Recently, corpora labeled with more than just valence have been built to overcome this limit. However, most Korean emotion corpora are small in the number of instances and cover a limited range of emotions. We introduce the KOTE dataset. KOTE contains 50k (250k cases) Korean online comments, each of which is manually labeled for 43 emotion labels or one special label (NO EMOTION) by crowdsourcing (Ps = 3,048). The emotion taxonomy of the 43 emotions is systematically established by cluster analysis of Korean emotion concepts expressed in word embedding space. After explaining how KOTE is developed, we also discuss the results of fine-tuning and an analysis of social discrimination in the corpus.

* 16 pages, 4 figures 
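
A minimal sketch of the kind of fine-tuning the guide refers to, framed as multi-label classification over 44 outputs (43 emotions plus NO EMOTION); the backbone model, placeholder comments, and label indices below are assumptions, not the authors' setup.

```python
# Multi-label fine-tuning step with a BCE-with-logits head (problem_type below
# makes the model use that loss automatically).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_LABELS = 44
name = "bert-base-multilingual-cased"        # example backbone only
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=NUM_LABELS, problem_type="multi_label_classification"
)

texts = ["이 영화 정말 최고였어요", "서비스가 너무 실망스러웠다"]   # placeholder comments
labels = torch.zeros(len(texts), NUM_LABELS)                      # multi-hot targets
labels[0, 3] = 1.0                                                # arbitrary emotion index
labels[1, 10] = 1.0                                               # arbitrary emotion index

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()                       # one illustrative training step
print(float(outputs.loss))
```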

Generating artificial texts as substitution or complement of training data

Oct 25, 2021
Vincent Claveau, Antoine Chaffin, Ewa Kijak

The quality of artificially generated texts has considerably improved with the advent of transformers. The question of using these models to generate training data for supervised learning tasks naturally arises. In this article, this question is explored under three aspects: (i) are artificial data an efficient complement? (ii) can they replace the original data when those are not available or cannot be distributed for confidentiality reasons? (iii) can they improve the explainability of classifiers? Different experiments are carried out on Web-related classification tasks -- namely sentiment analysis on product reviews and fake news detection -- using data artificially generated by fine-tuned GPT-2 models. The results show that such artificial data can be used to a certain extent, but require pre-processing to significantly improve performance. We show that bag-of-words approaches benefit the most from such data augmentation.

* 8 pages 
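
A hedged sketch of the augmentation recipe: generate synthetic labelled examples by prompting a GPT-2 model with the target class and lightly post-processing the output. Here an off-the-shelf GPT-2 stands in for the paper's fine-tuned models, and the prompt format and cleanup are assumptions.

```python
# Class-conditional generation of extra training examples with GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def synthesize(label, n=3):
    prompt = f"{label} product review:"
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=n,
                        do_sample=True, top_p=0.95)
    # Simple pre-processing: drop the prompt and keep the first sentence only.
    return [(o["generated_text"][len(prompt):].split(".")[0].strip() + ".", label)
            for o in outputs]

augmented = synthesize("positive") + synthesize("negative")
for text, label in augmented:
    print(label, "->", text)
```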

Evaluating Various Tokenizers for Arabic Text Classification

Jun 14, 2021
Zaid Alyafeai, Maged S. Al-shaibani, Mustafa Ghaleb, Irfan Ahmad

The first step in any NLP pipeline is learning word vector representations. However, given a large text corpus, representing all the words is not efficient. Many tokenization algorithms have emerged in the literature to tackle this problem by creating subwords, which in turn limit the vocabulary size of any text corpus. However, such algorithms are mostly language-agnostic and lack a proper way of capturing meaningful tokens; moreover, evaluating such techniques in practice is difficult. In this paper, we introduce three new tokenization algorithms for Arabic and compare them to three baselines using unsupervised evaluations. In addition, we compare all six algorithms by evaluating them on three tasks: sentiment analysis, news classification, and poetry classification. Our experiments show that the performance of such tokenization algorithms depends on the size of the dataset, the type of the task, and the amount of morphology present in the dataset.
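
For context, a typical language-agnostic baseline in such comparisons is a trained BPE subword tokenizer; the sketch below builds one with the Hugging Face tokenizers library on a placeholder corpus, and is not one of the paper's three Arabic-specific algorithms.

```python
# Train a small BPE tokenizer and inspect how it segments unseen text.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

corpus = [
    "الكتاب جديد",          # placeholder sentences; use a real corpus in practice
    "قرأت الكتاب الجديد",
    "الكتب مفيدة جدا",
]

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

print("vocab size:", tokenizer.get_vocab_size())
print(tokenizer.encode("الكتاب الجديد").tokens)   # subword segmentation of a new phrase
```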


AdCOFE: Advanced Contextual Feature Extraction in Conversations for emotion classification

Apr 09, 2021
Vaibhav Bhat, Anita Yadav, Sonal Yadav, Dhivya Chandrasekran, Vijay Mago

Emotion recognition in conversations is an important step in various virtual chatbots that require opinion-based feedback, as in social media threads, online support, and many other applications. Current emotion recognition in conversations models face issues such as (a) loss of contextual information between two dialogues of a conversation, (b) failure to give appropriate importance to significant tokens in each utterance, and (c) inability to pass on emotional information from previous utterances. The proposed model of Advanced Contextual Feature Extraction (AdCOFE) addresses these issues by performing unique feature extraction using knowledge graphs, sentiment lexicons, and phrases of natural language at all levels (word and position embedding) of the utterances. Experiments on the emotion recognition in conversations dataset show that AdCOFE is beneficial in capturing emotions in conversations.

* 12 pages, to be published in PeerJ Computer Science Journal 

CoCon: A Self-Supervised Approach for Controlled Text Generation

Jun 05, 2020
Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, Jie Fu

Pretrained Transformer-based language models (LMs) display remarkable natural language generation capabilities. Given their immense potential, controlling the text generated by such LMs is attracting attention. While there are studies that seek to control high-level attributes (such as sentiment and topic) of generated text, there is still a lack of more precise control over its content at the word and phrase level. Here, we propose Content-Conditioner (CoCon) to control an LM's output text with a target content at a fine-grained level. In our self-supervised approach, the CoCon block learns to help the LM complete a partially-observed text sequence by conditioning on content inputs that are withheld from the LM. Through experiments, we show that CoCon can naturally incorporate target content into generated texts and control high-level text attributes in a zero-shot manner.
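
The self-supervised setup described above can be pictured with a toy sample-construction routine: the continuation of a sequence is withheld from the LM's normal context and fed separately as the content input. The sketch shows only that data split, not the CoCon block or its training, and the function name is hypothetical.

```python
# Build one self-supervised training sample: the withheld continuation doubles
# as the content input the conditioning block must learn to incorporate.
def make_cocon_sample(tokens, split_at):
    prompt = tokens[:split_at]        # visible prefix given to the LM
    target = tokens[split_at:]        # text the LM should complete
    content = tokens[split_at:]       # same span, fed separately as content input
    return {"prompt": prompt, "content": content, "target": target}

sample = make_cocon_sample("the movie was surprisingly good".split(), split_at=3)
print(sample)
```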


Fine-grained Financial Opinion Mining: A Survey and Research Agenda

May 20, 2020
Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen

Opinion mining is a prevalent research issue in many domains. In the financial domain, however, it is still in its early stages. Most of the research on this topic focuses only on coarse-grained market sentiment analysis, i.e., 2-way classification into bullish/bearish. Thanks to recent financial technology (FinTech) developments, some interdisciplinary researchers have started to engage in in-depth analysis of investors' opinions. In this position paper, we first define financial opinions from both coarse-grained and fine-grained points of view, and then provide an overview of the issues already tackled. In addition to listing research issues of the existing topics, we further propose a road map of fine-grained financial opinion mining for future research, and point out several challenges yet to be explored. Moreover, we provide possible directions to deal with the proposed research issues.

