"Sentiment": models, code, and papers

Empowering Language Understanding with Counterfactual Reasoning

Jun 06, 2021
Fuli Feng, Jizhi Zhang, Xiangnan He, Hanwang Zhang, Tat-Seng Chua

Present language understanding methods have demonstrated an extraordinary ability to recognize patterns in texts via machine learning. However, existing methods apply the recognized patterns indiscriminately at test time, unlike humans, who employ counterfactual thinking, e.g., to scrutinize hard testing samples. Inspired by this, we propose a Counterfactual Reasoning Model, which mimics counterfactual thinking by learning from a few counterfactual samples. In particular, we devise a generation module that produces representative counterfactual samples for each factual sample, and a retrospective module that revisits the model prediction by comparing the counterfactual and factual samples. Extensive experiments on sentiment analysis (SA) and natural language inference (NLI) validate the effectiveness of our method.
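
As a rough illustration of the retrospective idea, the sketch below contrasts a classifier's score on a factual sample with its scores on counterfactual variants. All names, and the thresholding heuristic, are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch: flag predictions whose score barely moves under
# counterfactual edits, a sign the relied-on pattern may be spurious.
from typing import Callable, List

def retrospect(predict: Callable[[str], float],
               factual: str,
               counterfactuals: List[str],
               margin: float = 0.2) -> dict:
    """Contrast the factual prediction with counterfactual ones."""
    p_fact = predict(factual)
    p_cfs = [predict(cf) for cf in counterfactuals]
    max_shift = max(abs(p_fact - p) for p in p_cfs) if p_cfs else 0.0
    return {"prediction": p_fact,
            "max_counterfactual_shift": max_shift,
            "suspicious": max_shift < margin}

# Toy usage with a keyword-based "model":
predict = lambda s: 1.0 if "great" in s else 0.0
print(retrospect(predict, "a great movie", ["a terrible movie", "an okay movie"]))
```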

* Accepted by Findings of ACL'21 

Improving Multimodal Accuracy Through Modality Pre-training and Attention

Nov 11, 2020
Aya Abdelsalam Ismail, Mahmudul Hasan, Faisal Ishtiaq

Training a multimodal network is challenging, and complex architectures are typically required to achieve reasonable performance. We show that one reason for this is the difference in convergence rates across modalities. We address this by pre-training the modality-specific sub-networks of a multimodal architecture independently before end-to-end training of the entire network. Furthermore, we show that adding an attention mechanism between the sub-networks after pre-training helps identify the most important modality in ambiguous scenarios, boosting performance. We demonstrate that with these two techniques a simple network can match the performance of a complicated architecture that is significantly more expensive to train, on multiple tasks including sentiment analysis, emotion recognition, and speaker trait recognition.
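
A hedged PyTorch sketch of the second step: an attention layer that weights the embeddings produced by independently pre-trained sub-networks before a fused classifier. Dimensions and module choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    def __init__(self, dim: int, n_modalities: int, n_classes: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)            # scores each modality embedding
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, embs: torch.Tensor) -> torch.Tensor:
        # embs: (batch, n_modalities, dim), from pre-trained sub-networks
        weights = torch.softmax(self.score(embs), dim=1)  # (batch, n_mod, 1)
        fused = (weights * embs).sum(dim=1)               # attention-weighted sum
        return self.classifier(fused)

# Usage: embeddings from, e.g., text/audio/video sub-networks trained separately.
fusion = AttentiveFusion(dim=128, n_modalities=3, n_classes=2)
logits = fusion(torch.randn(4, 3, 128))
```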


Convolutional Feature Extraction and Neural Arithmetic Logic Units for Stock Prediction

May 18, 2019
Shangeth Rajaa, Jajati Keshari Sahoo

Stock prediction has been a topic of intense study for many years. Finance experts and mathematicians have long sought ways to predict future stock prices in order to decide whether to buy or sell a stock for profit. Stock experts and economists usually analyze previous stock values using technical indicators, sentiment analysis, and the like to predict the future stock price. In recent years, many studies have extensively used machine learning to predict stock behaviour. In this paper, we propose a data-driven deep learning approach that predicts future stock values from previous prices, combining the feature-extraction capability of convolutional neural networks with Neural Arithmetic Logic Units.
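
The abstract pairs a convolutional feature extractor with Neural Arithmetic Logic Units (NALU, Trask et al. 2018). Below is a minimal PyTorch sketch of one way to wire the two together; the window size, channel counts, and pooling are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class NALU(nn.Module):
    """Standard NALU cell: gated mix of additive and multiplicative paths."""
    def __init__(self, in_dim: int, out_dim: int, eps: float = 1e-7):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.G = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.eps = eps

    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        a = x @ W.t()                                          # additive path
        m = torch.exp(torch.log(x.abs() + self.eps) @ W.t())   # multiplicative path
        g = torch.sigmoid(x @ self.G.t())                      # learned gate
        return g * a + (1 - g) * m

class StockNet(nn.Module):
    def __init__(self, window: int = 30):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.nalu = NALU(16 * 8, 1)   # regresses the next price

    def forward(self, prices):        # prices: (batch, 1, window)
        return self.nalu(self.conv(prices))

pred = StockNet()(torch.randn(4, 1, 30))
```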

* Accepted at ICACDS 2019 - Springer CCIS 

Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Dec 13, 2018
Mingyue Shang, Zhenxin Fu, Hongzhi Yin, Bo Tang, Dongyan Zhao, Rui Yan

Natural language understanding is a challenging problem that covers a wide range of tasks. While previous methods generally train each task separately, we consider combining cross-task features to enhance task performance. In this paper, we incorporate logic information, with the help of the Natural Language Inference (NLI) task, into the Story Cloze Test (SCT). Previous work on SCT considered various semantic information, such as sentiment and topic, but lacked the logic information between sentences, which is an essential element of stories. We therefore propose to extract the logic information over the course of the story to improve understanding of the whole story. The logic information is modeled with the help of the NLI task. Experimental results demonstrate the strength of the logic information.
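
A hedged sketch of the cross-task idea: score each candidate ending with a base SCT model and add an NLI-derived logic signal (entailment minus contradiction between the story context and the ending). The function names and mixing weight are illustrative assumptions, not the paper's architecture.

```python
from typing import Callable, List, Tuple

def choose_ending(context: str,
                  endings: List[str],
                  sct_score: Callable[[str, str], float],
                  nli_probs: Callable[[str, str], Tuple[float, float, float]],
                  alpha: float = 0.5) -> int:
    """Pick the ending maximizing SCT score + alpha * logic score."""
    best, best_score = 0, float("-inf")
    for i, ending in enumerate(endings):
        entail, neutral, contra = nli_probs(context, ending)
        logic = entail - contra            # reward entailment, punish contradiction
        score = sct_score(context, ending) + alpha * logic
        if score > best_score:
            best, best_score = i, score
    return best
```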

* Student Abstract in AAAI-2019 

YouTube AV 50K: An Annotated Corpus for Comments in Autonomous Vehicles

Oct 15, 2018
Tao Li, Lei Lin, Minsoo Choi, Kaiming Fu, Siyuan Gong, Jian Wang

With one billion monthly viewers and millions of users discussing and sharing opinions, comments below YouTube videos are rich sources of data for opinion mining and sentiment analysis. We introduce the YouTube AV 50K dataset, a freely available collection of more than 50,000 YouTube comments and metadata from autonomous vehicle (AV)-related videos. We describe its creation process, its content and data format, and discuss its possible usages. In particular, we present a case study of the first self-driving car fatality to evaluate the dataset, and show how it can be used to better understand public attitudes toward self-driving cars and public reactions to the accident. Future developments of the dataset are also discussed.
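
One possible usage is sketched below: load comments and tally a crude lexicon-based sentiment per video. The JSON-lines layout and the field names ("video_id", "comment") are assumptions for illustration, not the dataset's documented schema.

```python
import json
from collections import defaultdict

POS = {"safe", "great", "amazing", "trust"}
NEG = {"crash", "dangerous", "scary", "fatality"}

def video_sentiment(path: str) -> dict:
    """Net positive-minus-negative word count per video_id."""
    scores = defaultdict(int)
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            words = rec["comment"].lower().split()
            scores[rec["video_id"]] += sum(w in POS for w in words)
            scores[rec["video_id"]] -= sum(w in NEG for w in words)
    return dict(scores)
```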

* in Proceedings of the Thirteenth International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2018) 

Universal Dependencies Parsing for Colloquial Singaporean English

May 18, 2017
Hongmin Wang, Yue Zhang, GuangYong Leonard Chan, Jie Yang, Hai Leong Chieu

Singlish is interesting to the ACL community both linguistically, as a major English-based creole, and computationally, for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, then training a neural network model that integrates English syntactic knowledge into a state-of-the-art parser trained on the Singlish treebank. Results show that English knowledge leads to a 25% relative error reduction, yielding a parser with 84.47% accuracy. To the best of our knowledge, we are the first to use neural stacking to improve cross-lingual dependency parsing for low-resource languages. We make both our annotations and our parser available for further research.
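
At its core, neural stacking feeds a source-side model's hidden states into the target-side model as extra features. A minimal PyTorch sketch of that interface, with illustrative dimensions rather than the paper's actual parser:

```python
import torch
import torch.nn as nn

class StackedEncoder(nn.Module):
    """Concatenate target-side word embeddings with frozen English-parser states."""
    def __init__(self, word_dim: int = 100, src_dim: int = 200, hidden: int = 200):
        super().__init__()
        self.proj = nn.Linear(word_dim + src_dim, hidden)

    def forward(self, word_emb, src_hidden):
        # word_emb:   (batch, seq, word_dim)  Singlish-side embeddings
        # src_hidden: (batch, seq, src_dim)   hidden states from the English parser
        return torch.relu(self.proj(torch.cat([word_emb, src_hidden], dim=-1)))

enc = StackedEncoder()
h = enc(torch.randn(2, 7, 100), torch.randn(2, 7, 200))
```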

* Accepted by ACL 2017 

Automatic Rule Extraction from Long Short Term Memory Networks

Feb 24, 2017
W. James Murdoch, Arthur Szlam

Although deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they reach their conclusions is often unclear. As a result, these models are generally treated as black boxes, yielding no insight into the underlying learned patterns. In this paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state-of-the-art LSTMs for sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple rule-based classifier that approximates the output of the LSTM.
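
A minimal sketch of the distillation recipe: rank words by importance, collect consistently important phrases, and use them as rules in a simple classifier. The ablation-based importance below is a crude stand-in for the paper's LSTM-specific decomposition; all names are illustrative.

```python
from typing import Callable, List

def important_words(predict: Callable[[List[str]], float],
                    tokens: List[str],
                    top_k: int = 3) -> List[str]:
    """Score each word by how much deleting it shifts the model output."""
    base = predict(tokens)
    deltas = [(abs(base - predict(tokens[:i] + tokens[i + 1:])), w)
              for i, w in enumerate(tokens)]
    return [w for _, w in sorted(deltas, reverse=True)[:top_k]]

def rule_classifier(pos_phrases: set, neg_phrases: set, text: str) -> int:
    """Approximate the LSTM with phrase-matching rules."""
    hits = sum(p in text for p in pos_phrases) - sum(p in text for p in neg_phrases)
    return 1 if hits >= 0 else 0
```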

* ICLR 2017 accepted paper 

Automatic evaluation of scientific abstracts through natural language processing

Nov 14, 2021
Lucas G. O. Lopes, Thales M. A. Vieira, William W. M. Lira

This work presents a framework to classify and evaluate research abstracts that focus on the description of processes and their applications. In this context, the paper proposes natural language processing algorithms to classify, segment, and evaluate the results of scientific work. First, the proposed framework categorizes the abstracts according to the problems they aim to solve, using a text classification approach. Then, each abstract is segmented into problem description, methodology, and results. Finally, the methodology of the abstract is ranked based on sentiment analysis of its results. The proposed framework allows us to quickly rank the best methods for solving specific problems. To validate the framework, we ran experiments on oil-production anomaly abstracts and achieved promising results.
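
The three stages chain naturally, as in the hedged sketch below; the stage implementations are placeholder callables, not the paper's models.

```python
from typing import Callable, Dict, List, Tuple

def rank_abstracts(abstracts: List[str],
                   classify: Callable[[str], str],
                   segment: Callable[[str], Dict[str, str]],
                   sentiment: Callable[[str], float]) -> Dict[str, List[str]]:
    """Classify, segment, then rank abstracts by results sentiment per problem."""
    ranked: Dict[str, List[Tuple[float, str]]] = {}
    for text in abstracts:
        problem = classify(text)                      # stage 1: problem category
        parts = segment(text)                         # stage 2: problem/method/results
        score = sentiment(parts.get("results", ""))   # stage 3: score the results
        ranked.setdefault(problem, []).append((score, text))
    return {p: [t for _, t in sorted(items, reverse=True)]
            for p, items in ranked.items()}
```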


Does Commonsense help in detecting Sarcasm?

Sep 17, 2021
Somnath Basu Roy Chowdhury, Snigdha Chaturvedi

Sarcasm detection is important for several NLP tasks such as sentiment identification in product reviews, user feedback, and online forums. It is a challenging task requiring a deep understanding of language, context, and world knowledge. In this paper, we investigate whether incorporating commonsense knowledge helps in sarcasm detection. For this, we incorporate commonsense knowledge into the prediction process using a graph convolution network with pre-trained language model embeddings as input. Our experiments with three sarcasm detection datasets indicate that the approach does not outperform the baseline model. We perform an exhaustive set of experiments to analyze where commonsense support adds value and where it hurts classification. Our implementation is publicly available at: https://github.com/brcsomnath/commonsense-sarcasm.
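
A minimal sketch of the modeling setup: a standard graph-convolution layer (Kipf & Welling) over a commonsense graph whose node features are pre-trained language model embeddings. Graph construction and the downstream classifier are omitted; sizes are assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (nodes, in_dim) LM embeddings; adj: (nodes, nodes) with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))   # mean-aggregate, then transform

x = torch.randn(5, 768)     # e.g., BERT-sized node features
adj = torch.eye(5)          # placeholder graph: self-loops only
out = GCNLayer(768, 128)(x, adj)
```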

* Accepted at Insights from Negative Results in NLP Workshop, EMNLP 2021 

Improving Formality Style Transfer with Context-Aware Rule Injection

Jun 01, 2021
Zonghai Yao, Hong Yu

Models pre-trained on large-scale regular text corpora often do not work well for user-generated data, where the language style differs significantly from mainstream text. Here we present Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST). CARI injects multiple rules into an end-to-end BERT-based encoder-decoder model and learns to select the optimal rules based on context. Intrinsic evaluation shows that CARI achieves new state-of-the-art performance on the FST benchmark dataset. Extrinsic evaluation shows that CARI can greatly improve regular pre-trained models' performance on several tweet sentiment analysis tasks.
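
A hedged sketch of the injection step: candidate rule rewrites are appended to the source sentence behind a marker token so an encoder can attend to, and effectively select among, them in context. The rules and the separator token are illustrative assumptions, not CARI's actual rule set.

```python
from typing import Callable, List

def inject_rules(sentence: str,
                 rules: List[Callable[[str], str]],
                 sep: str = " [RULE] ") -> str:
    """Concatenate the sentence with every rule rewrite that changes it."""
    rewrites = [r(sentence) for r in rules]
    return sentence + "".join(sep + rw for rw in rewrites if rw != sentence)

# Toy rules: expand a contraction, drop an informal intensifier.
rules = [lambda s: s.replace("gonna", "going to"),
         lambda s: s.replace("sooo ", "")]
print(inject_rules("we are gonna be sooo late", rules))
```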

* ACL2021 
