"Sentiment": models, code, and papers

Unsupervised Discovery of Gendered Language through Latent-Variable Modeling

Jun 11, 2019
Alexander Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, Ryan Cotterell

Studying the ways in which language is gendered has long been an area of interest in sociolinguistics. Studies have explored, for example, the speech of male and female characters in film and the language used to describe male and female politicians. In this paper, we aim not to merely study this phenomenon qualitatively, but instead to quantify the degree to which the language used to describe men and women is different and, moreover, different in a positive or negative way. To that end, we introduce a generative latent-variable model that jointly represents adjective (or verb) choice, with its sentiment, given the natural gender of a head (or dependent) noun. We find that there are significant differences between descriptions of male and female nouns and that these differences align with common gender stereotypes: Positive adjectives used to describe women are more often related to their bodies than adjectives used to describe men.

* To appear in ACL 2019 

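The paper's own approach is a generative latent-variable model; as a rough illustration of the kind of corpus statistic it formalizes (not the authors' method), the sketch below pairs adjectives with gendered head nouns via dependency parsing and compares their average sentiment. spaCy, the tiny gender word lists, and the toy sentiment lexicon are assumptions made purely for the example.

```python
# Rough illustration only: count adjectives modifying gendered head nouns and
# compare their average sentiment. This is NOT the paper's latent-variable
# model; spaCy and the tiny lexicons below are assumptions for the sketch.
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")

FEMALE = {"woman", "women", "girl", "mother", "actress"}
MALE = {"man", "men", "boy", "father", "actor"}
SENTIMENT = {"beautiful": 1.0, "brilliant": 1.0, "weak": -1.0, "strong": 0.5}  # toy lexicon

def adjective_counts(texts):
    counts = {"F": defaultdict(int), "M": defaultdict(int)}
    for doc in nlp.pipe(texts):
        for tok in doc:
            # adjectival modifier whose syntactic head is a gendered noun
            if tok.dep_ == "amod" and tok.head.lemma_.lower() in FEMALE | MALE:
                gender = "F" if tok.head.lemma_.lower() in FEMALE else "M"
                counts[gender][tok.lemma_.lower()] += 1
    return counts

def mean_sentiment(adj_counts):
    scored = [(SENTIMENT[a], n) for a, n in adj_counts.items() if a in SENTIMENT]
    total = sum(n for _, n in scored)
    return sum(s * n for s, n in scored) / total if total else 0.0

if __name__ == "__main__":
    corpus = ["The brilliant woman gave a strong speech.", "The weak man left early."]
    by_gender = adjective_counts(corpus)
    for g in ("F", "M"):
        print(g, dict(by_gender[g]), "mean sentiment:", mean_sentiment(by_gender[g]))
```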

Emergence of Grounded Compositional Language in Multi-Agent Populations

Jul 24, 2018
Igor Mordatch, Pieter Abbeel

By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing statistical patterns is insufficient. In this paper we investigate whether, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about the emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe the emergence of non-verbal communication, such as pointing and guiding, when language communication is unavailable.



Anaphora and Coreference Resolution: A Review

May 30, 2018
Rhea Sukthanker, Soujanya Poria, Erik Cambria, Ramkumar Thirunavukarasu

Entity resolution aims to resolve repeated references to an entity in a document and forms a core component of natural language processing (NLP) research. The field has immense potential to improve the performance of other NLP tasks such as machine translation, sentiment analysis, paraphrase detection, and summarization. Research on entity resolution in NLP has proliferated in two separate sub-areas: anaphora resolution and coreference resolution. In this review article, we aim to clarify the scope of these two tasks in entity resolution. We also carry out a detailed analysis of the datasets, evaluation metrics, and research methods that have been adopted to tackle this problem. The survey is intended to give the reader a clear understanding of what constitutes this NLP problem and the issues that require attention.



Tweet2Vec: Learning Tweet Embeddings Using Character-level CNN-LSTM Encoder-Decoder

Jul 26, 2016
Soroush Vosoughi, Prashanth Vijayaraghavan, Deb Roy

We present Tweet2Vec, a novel method for generating general-purpose vector representations of tweets. The model learns tweet embeddings using a character-level CNN-LSTM encoder-decoder. We trained our model on 3 million randomly selected English-language tweets. The model was evaluated on two tasks, tweet semantic similarity and tweet sentiment categorization, outperforming the previous state of the art on both. The evaluations demonstrate the power of the tweet embeddings generated by our model for various tweet categorization tasks. The vector representations generated by our model are generic and can therefore be applied to a variety of tasks. Although the model presented in this paper is trained on English-language tweets, the method can be used to learn tweet embeddings for other languages.

* Proceedings of SIGIR 2016, July 17-21, 2016, Pisa, Italy 

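A minimal PyTorch sketch of the described architecture, assuming illustrative layer sizes and a byte-level character vocabulary rather than the authors' released configuration: a character-level CNN feeds an LSTM encoder whose final state serves as the tweet embedding, and an LSTM decoder reconstructs the character sequence.

```python
# Minimal sketch of a character-level CNN-LSTM encoder-decoder in the spirit
# of Tweet2Vec; dimensions and vocabulary size are illustrative assumptions.
import torch
import torch.nn as nn

class CharCNNLSTMAutoencoder(nn.Module):
    def __init__(self, vocab_size=128, char_dim=32, conv_dim=64, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.conv = nn.Conv1d(char_dim, conv_dim, kernel_size=3, padding=1)
        self.encoder = nn.LSTM(conv_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(char_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode(self, chars):                          # chars: (batch, seq_len) int ids
        x = self.embed(chars).transpose(1, 2)         # (batch, char_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, conv_dim)
        _, (h, c) = self.encoder(x)
        return h, c                                   # h[-1] is the tweet embedding

    def forward(self, chars):
        h, c = self.encode(chars)
        dec_in = self.embed(chars)                    # teacher forcing on the same sequence
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.out(dec_out)                      # logits for reconstructing each char

# Usage: model.encode(batch)[0][-1] yields one fixed-size vector per tweet,
# which can then feed similarity or sentiment classifiers.
```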

A Quantitative Neural Coding Model of Sensory Memory

Jun 25, 2014
Peilei Liu, Ting Wang

The coding mechanism of sensory memory at the neuron scale is one of the most important questions in neuroscience. We put forward a quantitative neural network model that is self-organized, self-similar, and self-adaptive, much like an ecosystem governed by Darwinian theory. According to this model, neural coding is a many-to-one mapping from objects to neurons, and the whole cerebrum acts as a real-time statistical Turing machine with powerful representation and learning abilities. The model can reconcile several long-standing disputes, such as temporal versus rate-based coding, grandmother cells versus population coding, and decay theory versus interference theory. It also offers explanations for key questions such as memory consolidation, episodic memory, consciousness, and sentiment. Philosophical implications are discussed at the end.

* 9 pages, 3 figures 


Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification

Sep 12, 2021
Varsha Suresh, Desmond C. Ong

Fine-grained classification involves datasets with a larger number of classes that have subtle differences between them. Guiding the model to focus on the dimensions that differentiate these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks, emotion classification and sentiment analysis. We adaptively embed class relationships into the contrastive objective to weigh positives and negatives differently, in particular weighting closely confusable negatives more heavily than less similar negative examples. We find that our Label-aware Contrastive Loss outperforms previous contrastive methods in the presence of a larger number of classes and/or more confusable classes, and helps models produce more differentiated output distributions.

* Accepted at EMNLP 2021 

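A rough sketch of the weighting idea in a supervised contrastive objective, where negatives from more confusable classes contribute more to the denominator. The precomputed class-similarity matrix and the temperature below are assumptions for illustration, not the paper's exact formulation.

```python
# Supervised contrastive loss with label-aware negative weighting (sketch).
# class_sim is an assumed (C, C) matrix: larger values = more confusable classes.
import torch

def label_aware_contrastive_loss(feats, labels, class_sim, temperature=0.1):
    """feats: (N, d) L2-normalised embeddings; labels: (N,) class ids."""
    sim = feats @ feats.t() / temperature                    # (N, N) pairwise similarities
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos = (labels[:, None] == labels[None, :]) & ~eye        # same-class pairs
    neg = ~pos & ~eye                                        # different-class pairs

    exp_sim = torch.exp(sim)
    # negatives from confusable classes get a larger weight in the denominator
    weights = pos.float() + neg.float() * class_sim[labels][:, labels]
    denom = (exp_sim * weights).sum(dim=1, keepdim=True)

    log_prob = sim - torch.log(denom)                        # weighted log-softmax
    loss = -(log_prob * pos.float()).sum(dim=1) / pos.float().sum(dim=1).clamp(min=1)
    return loss.mean()

# Usage: feats = torch.nn.functional.normalize(encoder(batch), dim=-1), then
# backpropagate this loss alongside (or instead of) cross-entropy.
```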

Not Quite 'Ask a Librarian': AI on the Nature, Value, and Future of LIS

Jul 07, 2021
Jesse David Dinneen, Helen Bubinger

AI language models trained on Web data generate prose that reflects human knowledge and public sentiments, but can also contain novel insights and predictions. We asked the world's best language model, GPT-3, fifteen difficult questions about the nature, value, and future of library and information science (LIS), topics that receive perennial attention from LIS scholars. We present highlights from its 45 different responses, which range from platitudes and caricatures to interesting perspectives and worrisome visions of the future, thus providing an LIS-tailored demonstration of the current performance of AI language models. We also reflect on the viability of using AI to forecast or generate research ideas in this way today. Finally, we have shared the full response log online for readers to consider and evaluate for themselves.

* Final version to appear in ASIS&T '21: Proceedings of the 84th Annual Meeting of the Association for Information Science & Technology, 58 


Metaphor Detection using Deep Contextualized Word Embeddings

Sep 26, 2020
Shashwat Aggarwal, Ramesh Singh

Metaphors are ubiquitous in natural language, and their detection plays an essential role in many natural language processing tasks such as language understanding and sentiment analysis. Most existing approaches to metaphor detection rely on complex, hand-crafted and fine-tuned feature pipelines, which greatly limits their applicability. In this work, we present an end-to-end method composed of deep contextualized word embeddings, bidirectional LSTMs, and a multi-head attention mechanism for automatic metaphor detection. Unlike many existing approaches, our method requires only the raw text sequence as input to detect the metaphoricity of a phrase. We compare its performance against existing baselines on two benchmark datasets, TroFi and MOH-X. Experimental evaluations confirm the effectiveness of our approach.


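A minimal PyTorch sketch of the described pipeline, assuming precomputed contextualized embeddings (e.g. from ELMo or BERT) as input; the per-token classification head, layer sizes, and head count are illustrative assumptions, not the authors' code.

```python
# Sketch: contextualized embeddings -> BiLSTM -> multi-head attention -> tagger.
import torch
import torch.nn as nn

class MetaphorTagger(nn.Module):
    def __init__(self, emb_dim=1024, hid_dim=256, heads=4, num_labels=2):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hid_dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * hid_dim, num_labels)

    def forward(self, embeddings, padding_mask=None):
        # embeddings: (batch, seq_len, emb_dim) from a frozen contextual encoder
        h, _ = self.bilstm(embeddings)
        attended, _ = self.attn(h, h, h, key_padding_mask=padding_mask)
        return self.classifier(attended)              # (batch, seq_len, num_labels)
```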

FinBERT: A Pretrained Language Model for Financial Communications

Jul 09, 2020
Yi Yang, Mark Christopher Siy UY, Allen Huang

Contextual pretrained language models such as BERT (Devlin et al., 2019) have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled text resources. The financial sector also accumulates large amounts of financial communication text, yet no pretrained finance-specific language model has been available. In this work, we address this need by pretraining FinBERT, a financial domain-specific BERT model, on a large corpus of financial communications. Experiments on three financial sentiment classification tasks confirm the advantage of FinBERT over the generic-domain BERT model. The code and pretrained models are available at https://github.com/yya518/FinBERT. We hope this will be useful for practitioners and researchers working on financial NLP tasks.

* https://github.com/yya518/FinBERT 

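A minimal usage sketch with Hugging Face transformers for financial sentiment classification. The Hub model identifier below is an assumption; check the linked repository for the released checkpoint names.

```python
# Load a FinBERT-style checkpoint and classify financial sentences (sketch).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_ID = "yiyanghkust/finbert-tone"  # assumed Hub id; see github.com/yya518/FinBERT

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

sentences = ["Revenue grew 20% year over year.", "The company issued a profit warning."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities, e.g. positive/negative/neutral
```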
