
"Sentiment": models, code, and papers

Compositional Distributional Semantics with Long Short Term Memory

Apr 17, 2015
Phong Le, Willem Zuidema

We propose an extension of the recursive neural network that makes use of a variant of the long short-term memory architecture. The extension allows information low in parse trees to be stored in a memory register (the 'memory cell') and used much later, higher up in the parse tree. This provides a solution to the vanishing gradient problem and allows the network to capture long-range dependencies. Experimental results show that our composition outperformed the traditional neural-network composition on the Stanford Sentiment Treebank.

* 10 pages, 7 figures 
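
The composition step described above can be sketched as a binary Tree-LSTM cell. This is a minimal illustrative sketch with a toy hidden size and untrained random weights, not the authors' exact parameterization:

```python
# Minimal sketch of a binary Tree-LSTM composition step: the memory cell c
# carries information from low in the parse tree upward, mitigating
# vanishing gradients. Toy hidden size and random weights (assumptions).
import math, random

random.seed(0)
D = 4  # toy hidden size

def vec():
    return [random.uniform(-0.5, 0.5) for _ in range(D)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate(hl, hr, w):
    # elementwise gate computed from the two child hidden states
    return [sigmoid(w[k] * (hl[k] + hr[k])) for k in range(D)]

Wi, Wfl, Wfr, Wo, Wu = vec(), vec(), vec(), vec(), vec()

def compose(left, right):
    """Combine two child (h, c) pairs into a parent (h, c) pair."""
    (hl, cl), (hr, cr) = left, right
    i  = gate(hl, hr, Wi)   # input gate
    fl = gate(hl, hr, Wfl)  # forget gate, left child
    fr = gate(hl, hr, Wfr)  # forget gate, right child
    o  = gate(hl, hr, Wo)   # output gate
    u  = [math.tanh(Wu[k] * (hl[k] + hr[k])) for k in range(D)]
    # the memory cell mixes new input with both children's cells
    c = [i[k] * u[k] + fl[k] * cl[k] + fr[k] * cr[k] for k in range(D)]
    h = [o[k] * math.tanh(c[k]) for k in range(D)]
    return h, c

def leaf():
    return [random.uniform(-1, 1) for _ in range(D)], [0.0] * D

# compose ((w1 w2) w3): w1's cell contents can survive up to the root
root_h, root_c = compose(compose(leaf(), leaf()), leaf())
print(len(root_h), len(root_c))
```

Because the forget gates multiply the children's cells directly, a leaf's cell state can propagate to the root without repeated squashing, which is the intuition behind the long-range dependency claim.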


Distilling BERT for low complexity network training

May 13, 2021
Bansidhar Mangalwedhekar

This paper studies the efficiency of transferring BERT learnings to low-complexity models such as BiLSTMs, BiLSTMs with attention, and shallow CNNs, using sentiment analysis on the SST-2 dataset. It also compares the inference complexity of the BERT model with that of these lower-complexity models, and underlines the importance of these techniques for enabling high-performance NLP models on edge devices such as mobiles, tablets and MCU development boards like the Raspberry Pi, and for enabling exciting new applications.
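
The core of such a transfer is a distillation objective: the small student is trained against the teacher's temperature-softened output distribution. A minimal sketch with hypothetical toy logits, not the paper's exact training setup:

```python
# Minimal sketch of knowledge distillation: cross-entropy between the
# student's softened distribution and the teacher's (e.g. BERT's) soft
# targets. The logits below are toy values (assumptions).
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of the student's softened outputs vs the teacher's."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [3.2, -1.1]   # toy SST-2 (positive/negative) teacher logits
student = [1.0, -0.5]   # toy student logits
print(round(distill_loss(teacher, student), 4))
```

The temperature T > 1 exposes the teacher's relative confidence across classes, which is the extra signal a BiLSTM or shallow CNN student cannot get from hard labels alone.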


Iterative Recursive Attention Model for Interpretable Sequence Classification

Aug 30, 2018
Martin Tutek, Jan Šnajder

Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models are of limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model, which constructs incremental representations of input data by reusing the results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while obtaining performance close to the state of the art.

* 7 pages, 5 figures, Analyzing and interpreting neural networks for NLP Workshop at EMNLP 2018 
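
The reuse of previously computed queries can be sketched in miniature: each attention step's summary becomes the next step's query, so later steps build on earlier inferences. Toy one-dimensional "embeddings" are assumed; this is not the authors' exact model:

```python
# Minimal sketch of iterative attention: each step attends over the input
# and feeds its attended summary back in as the next query. Toy scalar
# token representations (assumptions).
import math

def attend(query, values):
    """Softmax-weighted average of values, scored by similarity to query."""
    scores = [math.exp(query * v) for v in values]
    z = sum(scores)
    weights = [s / z for s in scores]
    return sum(w * v for w, v in zip(weights, values)), weights

values = [0.1, 0.9, -0.4, 0.7]   # toy token representations
query = 0.0                      # initial, uninformed query
summaries = []
for step in range(3):
    summary, weights = attend(query, values)
    summaries.append(summary)
    query = summary              # reuse the result as the next query
print([round(s, 3) for s in summaries])
```

Inspecting the attention weights at each step is what makes the inference chain interpretable: one can see which inputs each successive query focused on.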


An Emotion-controlled Dialog Response Generation Model with Dynamic Vocabulary

Mar 04, 2021
Shuangyong Song, Kexin Wang, Chao Wang, Haiqing Chen, Huan Chen

In the response generation task, appropriate sentimental expressions can clearly improve how human-like responses are. However, real-world online systems require high QPS (queries per second, an indicator of a system's flow capacity), and a dynamic vocabulary mechanism has been shown to improve the speed of generative models. In this paper, we propose an emotion-controlled dialog response generation model based on the dynamic vocabulary mechanism, and experimental results show the benefit of this model.

* 4 pages 


A Report on the 2020 Sarcasm Detection Shared Task

Jun 04, 2020
Debanjan Ghosh, Avijit Vajpayee, Smaranda Muresan

Detecting sarcasm and verbal irony is critical for understanding people's actual sentiments and beliefs. Thus, the field of sarcasm analysis has become a popular research problem in natural language processing. As the community working on computational approaches for sarcasm detection is growing, it is imperative to conduct benchmarking studies to analyze the current state of the art and facilitate progress in this area. We report on the shared task on sarcasm detection we conducted as a part of the 2nd Workshop on Figurative Language Processing (FigLang 2020) at ACL 2020.

* 2nd Workshop on Figurative Language Processing (FigLang2020) at ACL 2020 


Multi-Source Domain Adaptation with Mixture of Experts

Oct 16, 2018
Jiang Guo, Darsh J Shah, Regina Barzilay

We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using meta-training. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer.

* 11 pages, EMNLP 2018 
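
The point-to-set idea can be sketched as weighting each source-domain expert by a softmax over (negative) distances between the target example and each source domain. Here the metric is a simple distance to the domain's mean feature and the experts are toy rules, both assumptions; the paper meta-learns its metric:

```python
# Minimal sketch of mixture-of-experts domain adaptation: each source
# expert's vote is weighted by how close the target example is to that
# source domain. Features, experts and the metric are toy stand-ins
# (assumptions), not the paper's learned point-to-set metric.
import math

source_domains = {           # domain -> toy training features
    "books":  [0.9, 1.1, 1.0],
    "movies": [0.1, 0.2, 0.0],
}
experts = {                  # per-domain predictors (toy threshold rules)
    "books":  lambda x: 1.0 if x > 0.5 else 0.0,
    "movies": lambda x: 1.0 if x > 0.3 else 0.0,
}

def point_to_set(x, feats):
    # distance from the example to the domain's mean feature
    mean = sum(feats) / len(feats)
    return abs(x - mean)

def moe_predict(x):
    """Weight each expert by a softmax of negative domain distance."""
    dists = {d: point_to_set(x, f) for d, f in source_domains.items()}
    exps = {d: math.exp(-dist) for d, dist in dists.items()}
    z = sum(exps.values())
    return sum((exps[d] / z) * experts[d](x) for d in experts)

print(round(moe_predict(0.4), 3))  # experts disagree; weights arbitrate
```

Down-weighting distant domains is also what gives the approach its robustness to negative transfer: an unrelated source simply receives little weight.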


Learning Robust Representations of Text

Sep 20, 2016
Yitong Li, Trevor Cohn, Timothy Baldwin

Deep neural networks have achieved remarkable results across many language processing tasks; however, these methods are highly sensitive to noise and adversarial attacks. We present a regularization-based method for limiting network sensitivity to its inputs, inspired by ideas from computer vision, thus learning models that are more robust. Empirical evaluation over a range of sentiment datasets with a convolutional neural network shows that, compared to a baseline model and the dropout method, our method achieves superior performance over noisy inputs and out-of-domain data.

* 5 pages with 2 pages reference, 2 tables, 1 figure 
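
The general idea of penalizing input sensitivity can be sketched as adding a term proportional to the squared gradient of the output with respect to the input, so small perturbations change predictions less. A toy scalar model with a finite-difference gradient is assumed; this is not the paper's exact regularizer:

```python
# Minimal sketch of an input-sensitivity penalty: the loss adds a term
# proportional to the squared gradient of the output w.r.t. the input,
# estimated by finite differences. Toy linear model (assumption).
def model(x, w):
    return w * x

def loss(x, y, w, lam=0.1, eps=1e-4):
    pred_err = (model(x, w) - y) ** 2
    # sensitivity of the output to the input (finite-difference gradient)
    sens = (model(x + eps, w) - model(x - eps, w)) / (2 * eps)
    return pred_err + lam * sens ** 2

# the sensitivity term grows with |w|, discouraging sharp input dependence
print(round(loss(1.0, 1.0, w=1.0), 4), round(loss(1.0, 1.0, w=5.0), 4))
```

Minimizing such a loss trades a little training fit for flatter input-output behavior, which is what buys robustness to noisy and out-of-domain inputs.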


Twitter Dataset on the Russo-Ukrainian War

Apr 07, 2022
Alexander Shevtsov, Christos Tzagkarakis, Despoina Antonakaki, Polyvios Pratikakis, Sotiris Ioannidis

On 24 February 2022, Russia invaded Ukraine, beginning what is now known as the Russo-Ukrainian War. We have initiated an ongoing dataset acquisition from the Twitter API. As of the writing of this paper, the dataset has reached 57.3 million tweets, originating from 7.7 million users. We apply an initial volume and sentiment analysis, while the dataset can be used for further exploratory investigation into topic analysis, hate speech, propaganda recognition, or even revealing potentially malicious entities such as botnets.


Towards Olfactory Information Extraction from Text: A Case Study on Detecting Smell Experiences in Novels

Dec 06, 2020
Ryan Brate, Paul Groth, Marieke van Erp

Environmental factors determine the smells we perceive, but societal factors shape the importance, sentiment and biases we give to them. Descriptions of smells in text, or as we call them 'smell experiences', offer a window into these factors, but they must first be identified. To the best of our knowledge, no tool exists to extract references to smell experiences from text. In this paper, we present two variations on a semi-supervised approach to identify smell experiences in English literature. The combined set of patterns from both implementations offers significantly better performance than a keyword-based baseline.

* Accepted to The 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature (LaTeCH-CLfL 2020). Barcelona, Spain. December 2020.
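
The contrast between a keyword baseline and pattern-based extraction can be sketched as follows. The patterns and example sentences here are hypothetical; the paper's semi-supervised method bootstraps its patterns from data:

```python
# Minimal sketch contrasting a keyword baseline with pattern-based
# extraction of smell-experience sentences. Patterns and sentences are
# invented illustrations (assumptions), not the paper's learned patterns.
import re

sentences = [
    "The room smelled of old books and dust.",
    "He could smell trouble coming.",          # figurative, no smell source
    "A faint scent of lavender drifted in.",
    "The results smell-tested well.",          # keyword false positive
]

KEYWORDS = re.compile(r"\bsmell", re.IGNORECASE)
# pattern: a smell word followed by an explicit source ("... of X")
PATTERNS = [
    re.compile(r"\b(smelled|smelling|scent|odou?r)\b.*\bof\b", re.IGNORECASE),
]

keyword_hits = [s for s in sentences if KEYWORDS.search(s)]
pattern_hits = [s for s in sentences if any(p.search(s) for p in PATTERNS)]
print(len(keyword_hits), len(pattern_hits))
```

The keyword baseline fires on any "smell" token, figurative or not, while patterns that require an explicit smell source trade some recall for much higher precision.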
