
"Sentiment": models, code, and papers

Quantifying Uncertainties in Natural Language Processing Tasks

Nov 18, 2018
Yijun Xiao, William Yang Wang

Reliable uncertainty quantification is a first step towards building explainable, transparent, and accountable artificial intelligence systems. Recent progress in Bayesian deep learning has made such quantification realizable. In this paper, we propose novel methods to study the benefits of characterizing model and data uncertainties for natural language processing (NLP) tasks. With empirical experiments on sentiment analysis, named entity recognition, and language modeling using convolutional and recurrent neural network models, we show that explicitly modeling uncertainties is not only necessary to measure output confidence levels, but also useful for improving model performance on various NLP tasks.
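
As a rough illustration of how model (epistemic) uncertainty can be surfaced in a sentiment classifier, the sketch below uses Monte Carlo dropout; the architecture, dropout rate, and sample count are placeholders, not the paper's setup.

```python
# Hypothetical sketch: Monte Carlo dropout over a small sentiment CNN to
# expose model uncertainty. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SentimentCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1)
        self.drop = nn.Dropout(0.5)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.emb(x).transpose(1, 2)                # (batch, emb, seq)
        h = torch.relu(self.conv(h)).max(dim=2).values # max-pool over time
        return self.fc(self.drop(h))

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout active at inference and average stochastic passes."""
    model.train()  # leaves dropout on; freeze any batch norm separately
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    # mean = predictive distribution; variance across passes ~ model uncertainty
    return probs.mean(0), probs.var(0).sum(-1)
```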

* To appear at AAAI 2019 


Bayesian Sparsification of Recurrent Neural Networks

Jul 31, 2017
Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights. The recently proposed Sparse Variational Dropout eliminates the majority of the weights in a feed-forward neural network without significant loss of quality. We apply this technique to sparsify recurrent neural networks. To account for the specifics of recurrent architectures, we also rely on Binary Variational Dropout for RNNs. We report a 99.5% sparsity level on a sentiment analysis task without a quality drop, and up to 87% sparsity on a language modeling task with a slight loss of accuracy.
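
A minimal sketch of the pruning rule behind sparse variational dropout, assuming post-training weight means and log-variances are available; the threshold is the value commonly used in this line of work, not necessarily the paper's exact choice.

```python
# Sketch of the sparse variational dropout pruning rule: each weight carries a
# learned mean theta and log-variance; weights whose implied dropout rate
# alpha = sigma^2 / theta^2 is large are zeroed out. Shapes are toy values.
import torch

def sparsify(theta, log_sigma2, threshold=3.0):
    """Drop weights with log alpha = log(sigma^2 / theta^2) above threshold."""
    log_alpha = log_sigma2 - torch.log(theta ** 2 + 1e-8)
    mask = (log_alpha < threshold).float()
    return theta * mask, 1.0 - mask.mean().item()  # sparse weights, sparsity

theta = torch.randn(256, 256)             # trained weight means (toy values)
log_sigma2 = torch.randn(256, 256) - 2.0  # trained log-variances (toy values)
w_sparse, sparsity = sparsify(theta, log_sigma2)
print(f"sparsity: {sparsity:.1%}")
```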

* Published in Workshop on Learning to Generate Natural Language, ICML, 2017 


Toward Automatic Understanding of the Function of Affective Language in Support Groups

Oct 06, 2016
Amit Navindgi, Caroline Brun, Cécile Boulard Masson, Scott Nowson

Understanding expressions of emotion in support forums has considerable value, and NLP methods are key to automating this. Many approaches understandably use subjective categories that are more fine-grained than a straightforward polarity-based spectrum. However, the definition of such categories is non-trivial and, in fact, we argue for the need to incorporate communicative elements even beyond subjectivity. To support our position, we report experiments on a sentiment-labelled corpus of posts taken from a medical support forum. We argue that not only is a more fine-grained approach to text analysis important, but simultaneously recognising the social function behind affective expressions enables a more accurate and more valuable level of understanding.

* 9 pages, 1 figure, conference workshop 


Headline Diagnosis: Manipulation of Content Farm Headlines

Apr 25, 2022
Yu-Chieh Chen, Pei-Yu Huang, Chun Lin, Yi-Ting Huang, Meng Chang Chen

As technology advances, news spreads quickly through social media. To attract more readers and earn additional profit, some outlets rewrite news en masse in a more appealing manner, so it is essential to accurately predict whether a news article comes from an official news agency. This work develops a headline classifier based on a Convolutional Neural Network to determine the credibility of a news article. The model primarily focuses on investigating key factors extracted from headlines, including word segmentation, part-of-speech tags, and sentiment features. By integrating these features into the proposed classification model, the evaluation achieves 93.99% accuracy.
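
One plausible way to combine the three headline feature types is as parallel input channels to a 1-D convolution, sketched below; feature dimensions, vocabulary sizes, and the fusion scheme are assumptions, not the paper's configuration.

```python
# Hedged sketch of a headline classifier that concatenates word, POS-tag, and
# per-token sentiment channels before a 1-D convolution. All sizes are toys.
import torch
import torch.nn as nn

class HeadlineCNN(nn.Module):
    def __init__(self, vocab=20000, pos_tags=50, emb=100, pos_emb=16):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.pos_emb = nn.Embedding(pos_tags, pos_emb)
        in_ch = emb + pos_emb + 1          # +1 for a per-token sentiment score
        self.conv = nn.Conv1d(in_ch, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, 2)        # official vs. content-farm headline

    def forward(self, words, pos, senti):
        # words, pos: (batch, seq) id tensors; senti: (batch, seq) float scores
        x = torch.cat([self.word_emb(words), self.pos_emb(pos),
                       senti.unsqueeze(-1)], dim=-1).transpose(1, 2)
        h = torch.relu(self.conv(x)).max(dim=2).values
        return self.fc(h)
```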

* Taiwan Academic Network Conference 26(2020) 680-684 
* Accepted by The 26th Taiwan Academic Network Conference (TANET) 2020 


Towards Detection of Subjective Bias using Contextualized Word Embeddings

Feb 16, 2020
Tanvi Dadu, Kartikey Pant, Radhika Mamidi

Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced into natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments in detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of 360k labeled instances drawn from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like BERT-large by a margin of 5.6 F1 points.
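
For flavor, a soft-voting ensemble over fine-tuned BERT classifiers could look like the sketch below; the checkpoint paths are placeholders, and the paper's actual ensemble members and combination scheme may differ.

```python
# Illustrative soft-voting ensemble of same-architecture fine-tuned BERT
# classifiers. Checkpoint paths are hypothetical stand-ins.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoints = ["./bert-wnc-seed1", "./bert-wnc-seed2", "./bert-wnc-seed3"]
tokenizer = AutoTokenizer.from_pretrained(checkpoints[0])
models = [AutoModelForSequenceClassification.from_pretrained(c).eval()
          for c in checkpoints]

def ensemble_predict(text):
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = [torch.softmax(m(**enc).logits, dim=-1) for m in models]
    return torch.stack(probs).mean(0)  # averaged subjective/neutral probabilities
```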

* To appear in Companion Proceedings of the Web Conference 2020 (WWW '20 Companion) 


Formality Style Transfer with Hybrid Textual Annotations

Mar 15, 2019
Ruochen Xu, Tao Ge, Furu Wei

Formality style transformation is the task of modifying the formality of a given sentence without changing its content. Its main challenge is the lack of large-scale sentence-aligned parallel data. In this paper, we propose an omnivorous model that consumes parallel data and formality-classified data jointly to alleviate the data sparsity issue. We empirically demonstrate the effectiveness of our approach by achieving state-of-the-art performance on a recently proposed benchmark dataset for formality transfer. Furthermore, our model can be readily adapted to other unsupervised text style transfer tasks, such as unsupervised sentiment transfer, and achieves competitive results on three widely recognized benchmarks.
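
One way to picture the joint use of the two data sources is a combined objective, sketched below; the loss weighting and the auxiliary style-classification term are assumptions, not the paper's actual objectives.

```python
# Loose sketch (not the paper's model) of mixing two supervision signals:
# sequence-level cross-entropy from parallel pairs plus a style-classification
# loss from formality-labeled sentences. The weight w is an assumption.
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

def joint_loss(seq_logits, tgt_ids, style_logits, style_labels, w=0.5):
    # seq_logits:   (batch, seq, vocab) decoder outputs on parallel data
    # style_logits: (batch, 2) classifier-head outputs on labeled-only data
    transfer = ce(seq_logits.flatten(0, 1), tgt_ids.flatten())
    style = ce(style_logits, style_labels)
    return transfer + w * style
```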



Harnessing Deep Neural Networks with Logic Rules

Nov 15, 2016
Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, Eric Xing

Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce the uninterpretability of neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results relative to previous best-performing systems.
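
A simplified sketch of the distillation objective follows: the student fits both the true labels and a rule-constrained teacher distribution. Building the teacher by boosting logits where rules fire is a simplified stand-in for the paper's posterior projection, and the constants are illustrative.

```python
# Sketch of rule-distillation: teacher ~ student probabilities reweighted by
# exp(C * rule_scores); student is trained toward labels and teacher jointly.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, labels, rule_scores, pi=0.9, C=6.0):
    # rule_scores: (batch, n_classes), how strongly the rules support each class
    student_logp = F.log_softmax(student_logits, dim=-1)
    teacher = F.softmax(student_logits.detach() + C * rule_scores, dim=-1)
    hard = F.nll_loss(student_logp, labels)            # fit ground truth
    soft = -(teacher * student_logp).sum(-1).mean()    # imitate the teacher
    return (1 - pi) * hard + pi * soft
```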

* Fix typos and experiment setting 


Learning Word Representations with Hierarchical Sparse Coding

Nov 06, 2014
Dani Yogatama, Manaal Faruqui, Chris Dyer, Noah A. Smith

We propose a new method for learning word representations using hierarchical regularization in sparse coding, inspired by the linguistic study of word meanings. We show an efficient learning algorithm based on stochastic proximal methods that is significantly faster than previous approaches, making it possible to perform hierarchical sparse coding on a corpus of billions of word tokens. Experiments on various benchmark tasks (word similarity ranking, analogies, sentence completion, and sentiment analysis) demonstrate that the method outperforms or is competitive with state-of-the-art methods. Our word representations are available at http://www.ark.cs.cmu.edu/dyogatam/wordvecs/.
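
For intuition, a toy proximal-gradient step for a group lasso penalty (the family of regularizers behind hierarchical sparse coding) might look like the following; the group structure, step size, and regularization strength are illustrative.

```python
# Toy sketch of one proximal gradient step on
#   min_a ||x - D a||^2 + lam * sum_g ||a_g||_2
# where the groups g would encode the word-meaning hierarchy.
import numpy as np

def prox_group(v, groups, lam):
    """Block soft-thresholding: shrink each group's norm by lam."""
    out = v.copy()
    for g in groups:                          # each g is an index array
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * v[g]
    return out

def prox_step(a, D, x, groups, lam, lr=0.01):
    grad = 2 * D.T @ (D @ a - x)              # gradient of the squared loss
    return prox_group(a - lr * grad, groups, lr * lam)
```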



Cost-effective Deployment of BERT Models in Serverless Environment

Mar 19, 2021
Katarína Benešová, Andrej Švec, Marek Šuppa

In this study we demonstrate the viability of deploying BERT-style models to AWS Lambda in a production environment. Since the freely available pre-trained models are too large to be deployed this way, we utilize knowledge distillation and fine-tune the models on proprietary datasets for two real-world tasks: sentiment analysis and semantic textual similarity. As a result, we obtain models that are tuned for a specific domain and deployable in the serverless environment. The subsequent performance analysis shows that this solution not only delivers latency levels acceptable for production use but is also a cost-effective alternative for small-to-medium sized deployments of BERT models, all without any infrastructure overhead.
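
An illustrative Lambda handler for a distilled sentiment model is sketched below; the bundled model path and response shape are placeholders, and the paper's exact packaging (layers, memory size) is not reproduced here.

```python
# Hypothetical AWS Lambda handler serving a distilled sentiment classifier.
import json
from transformers import pipeline

# Load once per container, outside the handler, so warm invocations reuse it.
# "/opt/model" is an assumed path to a model bundled with the deployment.
classifier = pipeline("sentiment-analysis", model="/opt/model")

def handler(event, context):
    text = json.loads(event["body"])["text"]
    result = classifier(text)[0]   # e.g. {"label": "POSITIVE", "score": 0.98}
    return {"statusCode": 200, "body": json.dumps(result)}
```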



Embarrassingly Simple Unsupervised Aspect Extraction

Apr 28, 2020
Stéphan Tulkens, Andreas van Cranenburgh

We present a simple but effective method for aspect identification in sentiment analysis. Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages. We introduce Contrastive Attention (CAt), a novel single-head attention mechanism based on an RBF kernel, which gives a considerable boost in performance and makes the model interpretable. Previous work relied on syntactic features and complex neural models. We show that, given the simplicity of current benchmark datasets for aspect extraction, such complex models are not needed. The code to reproduce the experiments reported in this paper is available at https://github.com/clips/cat.
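
To convey the flavor of the mechanism, here is a rough numpy sketch of RBF-kernel attention; gamma and the toy shapes are assumptions, and the authors' released code at https://github.com/clips/cat is the authoritative implementation.

```python
# Rough sketch in the spirit of Contrastive Attention: RBF similarity between
# each in-sentence word vector and a set of candidate aspect vectors yields
# attention weights; the weighted sum is the sentence representation.
import numpy as np

def rbf(X, y, gamma=0.03):
    """RBF kernel between each row of X and a single vector y."""
    return np.exp(-gamma * np.sum((X - y) ** 2, axis=-1))

def contrastive_attention(words, candidates, gamma=0.03):
    # words: (n, d) word vectors; candidates: (m, d) candidate aspect vectors
    sims = np.stack([rbf(words, c, gamma) for c in candidates])  # (m, n)
    att = sims.sum(axis=0)
    att = att / att.sum()          # normalize to an attention distribution
    return att @ words             # attention-weighted sentence vector
```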

* Accepted as ACL 2020 short paper 

