
"Sentiment": models, code, and papers

Learning Word Representations with Hierarchical Sparse Coding

Nov 06, 2014
Dani Yogatama, Manaal Faruqui, Chris Dyer, Noah A. Smith

We propose a new method for learning word representations using hierarchical regularization in sparse coding, inspired by the linguistic study of word meanings. We present an efficient learning algorithm based on stochastic proximal methods that is significantly faster than previous approaches, making it possible to perform hierarchical sparse coding on a corpus of billions of word tokens. Experiments on various benchmark tasks (word similarity ranking, analogies, sentence completion, and sentiment analysis) demonstrate that the method outperforms or is competitive with state-of-the-art methods. Our word representations are available at http://www.ark.cs.cmu.edu/dyogatam/wordvecs/.
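
For intuition, hierarchical sparse coding pairs a reconstruction loss with a tree-structured group-lasso penalty, so a latent dimension can be active for a word only if its ancestors in the tree are. A sketch of such an objective in our own notation (X, D, A, and the group set are illustrative, not the paper's exact formulation):

    % X: word-cooccurrence matrix, D: dictionary, A: sparse word vectors;
    % each group g is a subtree of the latent dimensions.
    \min_{D,\,A} \|X - DA\|_F^2
      + \lambda \sum_{v} \sum_{g \in \mathcal{G}} \|a_{v,g}\|_2

Stochastic proximal methods handle the non-smooth penalty by alternating gradient steps on the reconstruction term with a proximal (group-thresholding) step over the tree of groups.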



Cost-effective Deployment of BERT Models in Serverless Environment

Mar 19, 2021
Katarína Benešová, Andrej Švec, Marek Šuppa

In this study we demonstrate the viability of deploying BERT-style models to AWS Lambda in a production environment. Since the freely available pre-trained models are too large to be deployed this way, we use knowledge distillation and fine-tune the models on proprietary datasets for two real-world tasks: sentiment analysis and semantic textual similarity. As a result, we obtain models that are tuned for a specific domain and deployable in a serverless environment. The subsequent performance analysis shows that this solution not only achieves latency levels acceptable for production use but is also a cost-effective alternative for small-to-medium sized deployments of BERT models, all without any infrastructure overhead.
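
As a rough illustration of the deployment pattern (not the authors' code; the model id below is a public distilled checkpoint standing in for their proprietary fine-tuned models), a Lambda handler would load the model once at module scope so that warm invocations skip model startup:

    # Hypothetical AWS Lambda handler for a distilled sentiment model.
    import json
    from transformers import pipeline

    # Loaded once per container: warm invocations reuse it.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    def handler(event, context):
        body = json.loads(event["body"])
        result = classifier(body["text"])[0]
        return {
            "statusCode": 200,
            "body": json.dumps({"label": result["label"],
                                "score": result["score"]}),
        }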



Embarrassingly Simple Unsupervised Aspect Extraction

Apr 28, 2020
Stéphan Tulkens, Andreas van Cranenburgh

We present a simple but effective method for aspect identification in sentiment analysis. Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages. We introduce Contrastive Attention (CAt), a novel single-head attention mechanism based on an RBF kernel, which gives a considerable boost in performance and makes the model interpretable. Previous work relied on syntactic features and complex neural models. We show that given the simplicity of current benchmark datasets for aspect extraction, such complex models are not needed. The code to reproduce the experiments reported in this paper is available at https://github.com/clips/cat.

* Accepted as ACL 2020 short paper 
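
The core of the mechanism is compact enough to sketch: each word's attention weight is its summed RBF similarity to a set of candidate aspect vectors, normalized over the sentence. A minimal numpy rendition (gamma and the variable names are ours; see the linked repository for the authors' implementation):

    import numpy as np

    def rbf(x, y, gamma=0.03):
        # RBF kernel between a word vector and an aspect vector.
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def contrastive_attention(words, aspects, gamma=0.03):
        # words: (n, d) word embeddings; aspects: (k, d) aspect vectors.
        # Returns one attention weight per word, summing to 1.
        scores = np.array([sum(rbf(w, a, gamma) for a in aspects)
                           for w in words])
        return scores / scores.sum()

    def sentence_vector(words, aspects):
        # Attention-weighted mean of the word vectors.
        return contrastive_attention(words, aspects) @ words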


Aspect and Opinion Terms Extraction Using Double Embeddings and Attention Mechanism for Indonesian Hotel Reviews

Aug 19, 2019
Jordhy Fernando, Masayu Leylia Khodra, Ali Akbar Septiandri

Aspect and opinion term extraction from review texts is one of the key tasks in aspect-based sentiment analysis. To extract aspect and opinion terms from Indonesian hotel reviews, we adapt the double-embeddings feature and attention mechanism that outperformed the best systems at SemEval 2015 and 2016. We conduct experiments on 4,000 reviews to find the best configuration and to show the influence of the double embeddings and the attention mechanism on model performance. Using 1,000 reviews for evaluation, we achieve F1-measures of 0.914 and 0.90 for aspect and opinion term extraction at the token and entity (term) levels, respectively.
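
The "double embeddings" idea itself is simple to sketch: each token is represented by the concatenation of a general-purpose vector and a domain-specific one before entering the sequence labeler. A minimal illustration (the lookup tables are assumed inputs):

    import numpy as np

    def double_embed(tokens, general_emb, domain_emb):
        # general_emb / domain_emb: dicts mapping token -> vector.
        # Output: (n_tokens, d_general + d_domain) matrix.
        return np.stack([
            np.concatenate([general_emb[t], domain_emb[t]])
            for t in tokens
        ])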



Incorporating Structured Commonsense Knowledge in Story Completion

Nov 01, 2018
Jiaao Chen, Jianshu Chen, Zhou Yu

The ability to select an appropriate story ending is the first step towards perfect narrative comprehension. Story ending prediction requires not only the explicit clues within the context, but also implicit knowledge (such as commonsense) to construct a reasonable and consistent story. However, most previous approaches do not explicitly use background commonsense knowledge. We present a neural story ending selection model that integrates three types of information: narrative sequence, sentiment evolution, and commonsense knowledge. Experiments show that our model outperforms state-of-the-art approaches on a public dataset, the ROCStory Cloze Task, and that the performance gain from adding commonsense knowledge is significant.
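
As a deliberately simplified picture of combining the three signals (the actual model integrates them jointly in a neural architecture; the scoring functions and weights below are placeholders):

    def select_ending(context, endings,
                      narrative_score, sentiment_score, knowledge_score,
                      weights=(1.0, 1.0, 1.0)):
        # Late-fusion stand-in: pick the candidate ending with the
        # highest weighted sum of the three signals.
        w1, w2, w3 = weights
        return max(endings, key=lambda e: (
            w1 * narrative_score(context, e)
            + w2 * sentiment_score(context, e)
            + w3 * knowledge_score(context, e)))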



Learning to select data for transfer learning with Bayesian Optimization

Jul 17, 2017
Sebastian Ruder, Barbara Plank

Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to \emph{learn} data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are -- to some degree -- transferable across models, domains, and even tasks.

* EMNLP 2017. Code available at: https://github.com/sebastianruder/learn-to-select-data 
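
The setup can be sketched as follows: a candidate weight vector scores every example in the source pool by a weighted sum of similarity and diversity features, the top-ranked examples train the task model, and Bayesian Optimization searches the weights against dev performance. In this sketch, scikit-optimize is our choice of BO library, and features / train_and_eval are assumed stand-ins:

    import numpy as np
    from skopt import gp_minimize

    def objective(weights, pool, features, train_and_eval, n_select=2000):
        # Rank the pool by a weighted sum of per-example features,
        # train on the top slice, return negative dev score to minimize.
        scores = np.array([np.dot(weights, features(x)) for x in pool])
        top = [pool[i] for i in np.argsort(-scores)[:n_select]]
        return -train_and_eval(top)

    # Usage sketch:
    # best = gp_minimize(
    #     lambda w: objective(w, pool, features, train_and_eval),
    #     dimensions=[(-1.0, 1.0)] * n_features, n_calls=30)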


Lithium NLP: A System for Rich Information Extraction from Noisy User Generated Text on Social Media

Jul 13, 2017
Preeti Bhargava, Nemanja Spasojevic, Guoning Hu

In this paper, we describe the Lithium Natural Language Processing (NLP) system, a resource-constrained, high-throughput and language-agnostic system for information extraction from noisy user-generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags, and sentiment from text. We discuss several real-world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and academic NLP systems in terms of performance, information extracted, and languages supported. We show that Lithium NLP is on par with, and in some cases outperforms, state-of-the-art commercial NLP systems.

* 9 pages, 6 figures, 2 tables, EMNLP 2017 Workshop on Noisy User Generated Text WNUT 2017 


The Expressive Power of Word Embeddings

May 29, 2013
Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena

We seek to better understand the differences in quality among several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation of sentiment polarity and synonym/antonym relations shows that embeddings are able to capture surprisingly nuanced semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in the quality and characteristics of the semantics they capture. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on final results.

* submitted to ICML 2013, Deep Learning for Audio, Speech and Language Processing Workshop. 8 pages, 8 figures 
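
One probe in the spirit of these evaluations is to train a linear classifier on word vectors to predict sentiment polarity; its cross-validated accuracy measures how much polarity information the embedding space encodes. A minimal sketch (emb and the labeled word lists are assumed inputs):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def polarity_probe(emb, pos_words, neg_words):
        # emb: dict mapping word -> vector.
        X = np.stack([emb[w] for w in pos_words + neg_words])
        y = np.array([1] * len(pos_words) + [0] * len(neg_words))
        return cross_val_score(LogisticRegression(max_iter=1000),
                               X, y, cv=5).mean()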


indic-punct: An automatic punctuation restoration and inverse text normalization framework for Indic languages

Mar 31, 2022
Anirudh Gupta, Neeraj Chhimwal, Ankur Dhuriya, Rishabh Gaur, Priyanshi Shah, Harveen Singh Chadha, Vivek Raghavan

Automatic Speech Recognition (ASR) generates text that is usually devoid of punctuation. The absence of punctuation can hurt readability, and downstream NLP tasks such as sentiment analysis and machine translation benefit greatly from punctuation and sentence-boundary information. We present an approach for automatic punctuation of text using a pretrained IndicBERT model. Inverse text normalization is done with hand-written weighted finite-state transducer (WFST) grammars. We have developed this tool for 11 Indic languages, namely Hindi, Tamil, Telugu, Kannada, Gujarati, Marathi, Odia, Bengali, Assamese, Malayalam, and Punjabi. All code and data are publicly available.

* Submitted to InterSpeech 2022. arXiv admin note: text overlap with arXiv:2104.05055 by other authors 
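
Punctuation restoration is naturally framed as token classification: each subword token receives a label such as O, COMMA, or PERIOD. A hedged sketch along those lines (the label set is illustrative and not the authors' exact configuration; the checkpoint would need fine-tuning on punctuation-labeled text before use):

    import torch
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    LABELS = ["O", "COMMA", "PERIOD", "QUESTION"]

    tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
    model = AutoModelForTokenClassification.from_pretrained(
        "ai4bharat/indic-bert", num_labels=len(LABELS))

    def restore_labels(text):
        # Predict one punctuation label per subword token.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            preds = model(**enc).logits.argmax(-1)[0]
        return [LABELS[int(p)] for p in preds]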


Joint Learning for Aspect and Polarity Classification in Persian Reviews Using Multi-Task Deep Learning

Jan 17, 2022
Milad Vazan

This paper focuses on two sub-tasks of aspect-based sentiment analysis in the Persian language: aspect category detection (ACD) and aspect category polarity (ACP). Most previous methods focus on solving only one of these sub-tasks at a time. We propose a multi-task learning model based on deep neural networks that can concurrently detect aspect categories and their polarities. We evaluate the proposed method on a Persian-language dataset in the movie domain across several deep learning-based models. Final experiments show that the CNN model outperforms the other models.

* 16 pages, 6 figures 
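
A minimal sketch of the multi-task shape: a shared encoder feeds one head for aspect category detection (multi-label) and one for per-category polarity. The dimensions and the small CNN encoder below are illustrative placeholders for the paper's models:

    import torch
    import torch.nn as nn

    class MultiTaskABSA(nn.Module):
        def __init__(self, vocab_size, emb_dim=100,
                     n_categories=5, n_polarities=3):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.Sequential(
                nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1))
            self.acd_head = nn.Linear(128, n_categories)
            self.acp_head = nn.Linear(128, n_categories * n_polarities)

        def forward(self, token_ids):
            h = self.emb(token_ids).transpose(1, 2)   # (B, E, T)
            h = self.encoder(h).squeeze(-1)           # (B, 128)
            return self.acd_head(h), self.acp_head(h)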

