
"Sentiment": models, code, and papers

Incorporating Domain Knowledge into Medical NLI using Knowledge Graphs

Aug 31, 2019
Soumya Sharma, Bishal Santra, Abhik Jana, T. Y. S. S. Santosh, Niloy Ganguly, Pawan Goyal

Recently, biomedical embeddings obtained from language models such as BioELMo have shown state-of-the-art results on the textual inference task in the medical domain. In this paper, we explore how to incorporate structured domain knowledge, available in the form of a knowledge graph (UMLS), for the Medical NLI task. Specifically, we experiment with fusing embeddings obtained from the knowledge graph with state-of-the-art approaches for the NLI task (the ESIM model). We also experiment with fusing domain-specific sentiment information for the task. Experiments conducted on the MedNLI dataset clearly show that this strategy improves on the baseline BioELMo architecture for the Medical NLI task.

* EMNLP 2019 accepted short paper 
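One common way to fuse contextual and knowledge-graph embeddings, as the abstract describes, is simple concatenation along the feature dimension before the downstream model. The sketch below assumes illustrative dimensions and random placeholders; it is not the paper's exact BioELMo/UMLS setup.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_ctx, d_kg = 5, 1024, 200
contextual = rng.normal(size=(seq_len, d_ctx))  # stand-in for BioELMo-style token embeddings
kg = rng.normal(size=(seq_len, d_kg))           # stand-in for UMLS concept embeddings

# Fuse by concatenation: each token now carries both contextual and KG signal,
# and the result feeds the inference model (e.g. ESIM) unchanged otherwise.
fused = np.concatenate([contextual, kg], axis=-1)
print(fused.shape)  # (5, 1224)
```

The concatenated vectors can then replace the plain contextual embeddings at the model's input layer.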


LSICC: A Large Scale Informal Chinese Corpus

Nov 26, 2018
Jianyu Zhao, Zhuoran Ji

Deep learning based natural language processing models are powerful but require large-scale datasets. Because of the significant gap between real-world tasks and existing Chinese corpora, in this paper we introduce a large-scale corpus of informal Chinese. The corpus contains around 37 million book reviews and 50 thousand netizens' comments on news articles. We analyze the informal-word frequencies of the corpus and show how it differs from existing corpora. The corpus can further be used to train deep learning based models for natural language processing tasks such as Chinese word segmentation and sentiment analysis.



Occam's Gates

Jun 27, 2015
Jonathan Raiman, Szymon Sidor

We present a complementary objective for training recurrent neural networks (RNNs) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success on many difficult sequence-to-sequence classification problems with long- and short-term dependencies; however, these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activations of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.

* In review at NIPS 
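The regularizer the abstract describes amounts to adding the mean absolute gate activation to the task loss. The sketch below uses toy numbers (the penalty weight and loss value are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy gate activations for a batch of sequences: (batch, time, hidden).
gates = sigmoid(rng.normal(size=(8, 20, 64)))

task_loss = 0.42   # stand-in for the usual classification loss
l1_weight = 1e-3   # hypothetical regularization strength

# L1 penalty on gate activations pushes gates toward 0 (closed),
# which regularizes the model and leaves a sparse, human-readable
# map of which inputs the network actually used.
gate_penalty = np.abs(gates).mean()
total_loss = task_loss + l1_weight * gate_penalty
```

Since the gates are sigmoid outputs in (0, 1), the penalty is bounded and simply trades off against the task loss.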


Polite Emotional Dialogue Acts for Conversational Analysis in Daily Dialog Data

Dec 28, 2021
Chandrakant Bothe

Many socio-linguistic cues are used in conversational analysis, such as emotion, sentiment, and dialogue acts. One of the fundamental social cues is politeness, which linguistically possesses properties useful for conversational analysis. This short article presents brief findings on polite emotional dialogue acts, where we correlate the relational bonds between these socio-linguistic cues. We found that utterances with the emotion classes Anger and Disgust are more likely to be impolite, while those with Happiness and Sadness tend to be polite. A similar phenomenon occurs with dialogue acts: Inform and Commissive contain more polite utterances than Question and Directive. Finally, we conclude with future directions for this work.



Task-Specific Pre-Training and Cross Lingual Transfer for Code-Switched Data

Feb 24, 2021
Akshat Gupta, Sai Krishna Rallabandi, Alan Black

Using task-specific pre-training and leveraging cross-lingual transfer are two of the most popular ways to handle code-switched data. In this paper, we compare the effects of both for the task of sentiment analysis. We work with two Dravidian code-switched language pairs - Tamil-English and Malayalam-English - and four different BERT-based models. We compare the effects of task-specific pre-training and cross-lingual transfer and find that task-specific pre-training results in superior zero-shot and supervised performance compared to the performance achieved by leveraging cross-lingual transfer from multilingual BERT models.



Deep Learning Based Text Classification: A Comprehensive Review

Apr 06, 2020
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, Jianfeng Gao

Deep learning based models have surpassed classical machine learning based approaches in various text classification tasks, including sentiment analysis, news categorization, question answering, and natural language inference. In this work, we provide a detailed review of more than 150 deep learning based models for text classification developed in recent years, and discuss their technical contributions, similarities, and strengths. We also provide a summary of more than 40 popular datasets widely used for text classification. Finally, we provide a quantitative analysis of the performance of different deep learning models on popular benchmarks, and discuss future research directions.



Fraud detection in telephone conversations for financial services using linguistic features

Dec 10, 2019
Nikesh Bajaj, Tracy Goodluck Constance, Marvin Rajwadi, Julie Wall, Mansour Moniri, Cornelius Glackin, Nigel Cannings, Chris Woodruff, James Laird

Detecting the elements of deception in a conversation is one of the most challenging problems for the AI community. It becomes even more difficult to design a transparent system that is fully explainable and satisfies the deployment requirements of financial and legal services. This paper presents an approach for fraud detection in transcribed telephone conversations using linguistic features. The proposed approach exploits the syntactic and semantic information of the transcription to extract both the linguistic markers and the sentiment of the customer's response. We demonstrate the results on real-world financial services data using simple, robust and explainable classifiers such as Naive Bayes, Decision Tree, Nearest Neighbours, and Support Vector Machines.

* Published - 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), AI for Social Good Workshop, Vancouver, Canada 
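Extracting linguistic markers from a transcript turn, as the abstract describes, can be sketched as a small feature function over tokenized utterances. The marker word lists and feature names below are illustrative assumptions, not the paper's actual feature set:

```python
import re

# Illustrative marker lexicons (hypothetical, not the paper's).
HEDGES = {"maybe", "probably", "perhaps", "think"}
NEGATIONS = {"not", "never", "no"}

def linguistic_features(utterance: str) -> dict:
    """Turn one transcript utterance into a small feature dictionary."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return {
        "n_tokens": len(tokens),
        "hedge_ratio": sum(t in HEDGES for t in tokens) / max(len(tokens), 1),
        "negation_count": sum(t in NEGATIONS for t in tokens),
    }

feats = linguistic_features("I think I never authorised that transfer, maybe")
print(feats)
```

Feature dictionaries like this can then be vectorized and fed to the simple classifiers the paper mentions (Naive Bayes, decision trees, SVMs), keeping the pipeline explainable.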


Explanatory Masks for Neural Network Interpretability

Nov 15, 2019
Lawrence Phillips, Garrett Goh, Nathan Hodas

Neural network interpretability is a vital component for applications across a wide variety of domains. In such cases it is often useful to analyze a network which has already been trained for its specific purpose. In this work, we develop a method to produce explanation masks for pre-trained networks. The mask localizes the most important aspects of each input for prediction of the original network. Masks are created by a secondary network whose goal is to create as small an explanation as possible while still preserving the predictive accuracy of the original network. We demonstrate the applicability of our method for image classification with CNNs, sentiment analysis with RNNs, and chemical property prediction with mixed CNN/RNN architectures.

* Presented at IJCAI-18 Workshop on Explainable Artificial Intelligence (XAI) 
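The objective the abstract describes balances two terms: the masked input should preserve the frozen network's prediction, and the mask should be as small as possible. The sketch below uses a stand-in linear scorer for the pre-trained network and an illustrative trade-off weight:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_model(x):
    # Stand-in for the pre-trained network (here: a fixed linear scorer).
    w = np.linspace(-1.0, 1.0, x.size)
    return float(w @ x)

x = rng.normal(size=16)            # one input example
mask = rng.uniform(size=16)        # candidate explanation mask in [0, 1]

# Fidelity: keep the original prediction under the masked input.
fidelity = (frozen_model(mask * x) - frozen_model(x)) ** 2
# Sparsity: prefer as small an explanation as possible.
sparsity = mask.mean()

objective = fidelity + 0.1 * sparsity  # 0.1 is a hypothetical trade-off weight
```

In the paper's setting the mask is produced by a secondary network trained on this kind of objective, rather than sampled directly.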


Language Independent Sequence Labelling for Opinion Target Extraction

Jan 28, 2019
Rodrigo Agerri, German Rigau

In this research note we present a language-independent system that models Opinion Target Extraction (OTE) as a sequence labelling task. The system consists of a combination of clustering features implemented on top of a simple set of shallow local features. Experiments on the well-known Aspect Based Sentiment Analysis (ABSA) benchmarks show that our approach is very competitive across languages, obtaining the best results for six languages across seven different datasets. Furthermore, the results provide further insights into the behaviour of clustering features for sequence labelling tasks. The system and models generated in this work are publicly available to facilitate reproducibility of the results.

* Artificial Intelligence (2018), 268: 65-85 
* 17 pages 
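Combining shallow local features with word-cluster features, as the abstract describes, can be sketched as a per-token feature function for a sequence labeller. The feature names and cluster IDs below are illustrative assumptions, not the paper's exact feature set:

```python
# Illustrative word-cluster lookup (e.g. Brown-style bit-string IDs).
WORD_CLUSTERS = {"battery": "1101", "screen": "1100", "great": "0111"}

def token_features(tokens, i):
    """Shallow local features plus a cluster feature for token i."""
    w = tokens[i]
    return {
        "lower": w.lower(),
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "is_title": w.istitle(),
        "cluster": WORD_CLUSTERS.get(w.lower(), "UNK"),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

feats = token_features(["The", "battery", "lasts"], 1)
print(feats)
```

Because the cluster lookup is induced from unlabelled text, the same feature template transfers across languages, which is what makes the approach language-independent.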

