"Sentiment": models, code, and papers

Comparative Study of Language Models on Cross-Domain Data with Model Agnostic Explainability

Sep 09, 2020
Mayank Chhipa, Hrushikesh Mahesh Vazurkar, Abhijeet Kumar, Mridul Mishra

With the recent influx of bidirectional contextualized transformer language models in NLP, a systematic comparative study of these models across a variety of datasets has become a necessity; moreover, the performance of these language models has not been explored on non-GLUE datasets. The study presented in this paper compares state-of-the-art language models - BERT, ELECTRA, and BERT's derivatives RoBERTa, ALBERT, and DistilBERT. We conducted experiments by fine-tuning these models on cross-domain and disparate data and present an in-depth analysis of the models' performance. Moreover, an explainability analysis of the language models, coherent with their pretraining, is presented, which verifies the context-capturing capabilities of these models through a model-agnostic approach. The experimental results establish a new state of the art for the Yelp 2013 rating classification task and the Financial Phrasebank sentiment detection task, with 69% and 88.2% accuracy respectively. Finally, the study presented here can greatly assist industry researchers in choosing a language model effectively, whether in terms of performance or compute efficiency.

* 6 pages. Source code: https://github.com/fidelity/classitransformers. PyPI: https://pypi.org/project/classitransformers/ 
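
To make the comparison concrete, here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries rather than the paper's own classitransformers wrapper (linked above). The checkpoint names, hyperparameters, and the IMDB stand-in dataset are illustrative assumptions, not the paper's exact setup.

```python
# Minimal fine-tuning sketch for the BERT-family models the paper compares.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Any of the compared checkpoints can be swapped in here, e.g.
# "roberta-base", "albert-base-v2", "distilbert-base-uncased",
# or "google/electra-base-discriminator".
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # stand-in for the paper's cross-domain datasets

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```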

ParsBERT: Transformer-based Model for Persian Language Understanding

May 31, 2020
Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri

The surge of pre-trained language models has ushered in a new era in the field of Natural Language Processing (NLP) by allowing us to build powerful language models. Among these, Transformer-based models such as BERT have become increasingly popular due to their state-of-the-art performance. However, these models are usually focused on English, leaving other languages to multilingual models with limited resources. This paper proposes ParsBERT, a monolingual BERT for the Persian language, which achieves state-of-the-art performance compared to other architectures and multilingual models. Since the amount of data available for NLP tasks in Persian is very limited, we also compose a massive dataset for a range of NLP tasks as well as for pre-training the model. ParsBERT obtains higher scores on all datasets, both pre-existing and newly composed, and improves the state of the art by outperforming both multilingual BERT and prior work on Sentiment Analysis, Text Classification, and Named Entity Recognition tasks.

* 10 pages, 5 figures, 7 tables; Table 7 corrected along with some references related to it 
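
For readers who want to try the model, a minimal usage sketch follows. It assumes the transformers library and the HooshvareLab/bert-base-parsbert-uncased checkpoint on the Hugging Face Hub (verify the exact model ID against the authors' repository); note that the classification head below starts untrained and only gives meaningful sentiment probabilities after fine-tuning.

```python
# Minimal ParsBERT loading sketch; the Hub ID is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "HooshvareLab/bert-base-parsbert-uncased"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

text = "این فیلم فوق‌العاده بود"  # "This movie was wonderful."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (after fine-tuning)
```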

Beyond Accuracy: Behavioral Testing of NLP models with CheckList

May 08, 2020
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh

Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs, as users without it.

* Association for Computational Linguistics (ACL), 2020 
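
A minimal sketch of the released checklist package (pip install checklist), following the library's published examples - treat the exact calls as assumptions. It shows a template-driven Minimum Functionality Test and a typo-based Invariance perturbation.

```python
# Template-based test generation and perturbation with CheckList.
from checklist.editor import Editor
from checklist.perturb import Perturb

editor = Editor()
# Minimum Functionality Test (MFT): sentiment should track the adjective.
mft = editor.template("The flight was {adj}.",
                      adj=["great", "terrible", "fine"])
print(mft.data[:3])

# Invariance test (INV): small typos should not flip the prediction.
inv = Perturb.perturb(["The flight was great."], Perturb.add_typos)
print(inv.data)
```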

User Generated Data: Achilles' Heel of BERT

Apr 21, 2020
Ankit Kumar, Piyush Makhija, Anuj Gupta

Owing to BERT's phenomenal success on various NLP tasks and benchmark datasets, industry practitioners have started to experiment with incorporating BERT into applications that solve industry use cases. Industrial NLP applications are known to deal with much noisier data than benchmark datasets. In this work, we systematically show that when the text data is noisy, there is a significant degradation in the performance of BERT. While this work is motivated by our business use case of building NLP applications for user-generated text data, which is known to be very noisy, our findings are applicable across various use cases in the industry. Specifically, we show that BERT's performance on fundamental tasks like sentiment analysis and textual similarity drops significantly as we introduce noise into the data in the form of spelling mistakes and typos. For our experiments we use three well-known datasets - IMDB movie reviews, SST-2, and STS-B - to measure performance. Further, we identify the shortcomings in the BERT pipeline that are responsible for this drop in performance.

* 7 pages, 2 figures, 6 plots 
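
As a rough illustration of the kind of character-level noise studied here, the sketch below swaps adjacent letters at a configurable rate. The function is an illustrative stand-in; the exact noise-injection procedure and rates are the paper's, not this code's.

```python
# Simulate typo-style noise by swapping adjacent letters at a given rate.
import random

def add_typos(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent alphabetic characters at roughly `rate` positions."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(add_typos("this movie was absolutely wonderful"))
# Feed both the clean and the noisy version to the fine-tuned model
# and compare accuracies to measure the degradation.
```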

Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus

Apr 08, 2019
Hongyu Gong, Suma Bhat, Lingfei Wu, Jinjun Xiong, Wen-mei Hwu

Text style transfer rephrases a text from a source style (e.g., informal) to a target style (e.g., formal) while keeping its original meaning. Despite the success existing works have achieved using a parallel corpus for the two styles, transferring text style has proven significantly more challenging when no parallel training corpus is available. In this paper, we address this challenge using a reinforcement-learning-based generator-evaluator architecture. Our generator employs an attention-based encoder-decoder to transfer a sentence from the source style to the target style. Our evaluator is an adversarially trained style discriminator with semantic and syntactic constraints that scores the generated sentence for style, meaning preservation, and fluency. Experimental results on two different style transfer tasks (sentiment transfer and formality transfer) show that our model outperforms state-of-the-art approaches. Furthermore, we perform a manual evaluation that demonstrates the effectiveness of the proposed method using subjective metrics of generated text quality.
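
A minimal sketch of how a generator-evaluator reward of this kind can be assembled: the evaluator's style, meaning-preservation, and fluency scores are mixed into one scalar that reinforces sampled transfers. The weights and the [0, 1] score ranges are illustrative assumptions, not the paper's trained scorers.

```python
# Combine the evaluator's three scores into one RL reward (illustrative).
def transfer_reward(style_score: float,
                    semantic_score: float,
                    fluency_score: float,
                    w_style: float = 0.5,
                    w_sem: float = 0.3,
                    w_flu: float = 0.2) -> float:
    """Weighted mix of the evaluator's scores, each assumed to lie in [0, 1]."""
    return w_style * style_score + w_sem * semantic_score + w_flu * fluency_score

# With REINFORCE, each sampled transfer y ~ G(x) is reinforced by this reward:
#   loss = -reward * log p_G(y | x)
print(transfer_reward(0.9, 0.8, 0.7))
```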


Anxious Depression Prediction in Real-time Social Data

Mar 25, 2019
Akshi Kumar, Aditi Sharma, Anshika Arora

Mental well-being and social media have been closely related domains of study. In this research, a novel model, the AD prediction model, is proposed for anxious depression prediction from real-time tweets. This mixed anxiety-depressive disorder is predominantly associated with erratic thought processes, restlessness, and sleeplessness. Based on linguistic cues and user posting patterns, the feature set is defined using a 5-tuple vector. An anxiety-related lexicon is built to detect the presence of anxiety indicators. The time and frequency of tweets are analyzed for irregularities, and opinion polarity analytics is performed to find inconsistencies in posting behaviour. The model is trained using three classifiers (multinomial naïve Bayes, gradient boosting, and random forest), combined by majority voting with an ensemble voting classifier. Preliminary results are evaluated on the tweets of 100 sampled users, and the proposed model achieves a classification accuracy of 85.09%. The classifier ensemble maps directly onto scikit-learn, as the hedged sketch below shows.
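
In the sketch, the bag-of-words features and toy tweets stand in for the paper's 5-tuple linguistic and behavioural features; only the three-classifier majority-voting ensemble mirrors the abstract.

```python
# Majority (hard) voting over the paper's three classifiers in scikit-learn.
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    CountVectorizer(),  # stand-in for the paper's 5-tuple feature vector
    VotingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("gb", GradientBoostingClassifier()),
                    ("rf", RandomForestClassifier())],
        voting="hard",  # majority voting, as in the paper
    ),
)

tweets = ["i cannot sleep again, everything feels pointless",
          "great run this morning, feeling energised"]
labels = [1, 0]  # 1 = anxious-depression indicators present, 0 = absent
clf.fit(tweets, labels)
print(clf.predict(["another sleepless night"]))
```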


Quantum-inspired Complex Word Embedding

May 29, 2018
Qiuchi Li, Sagar Uprety, Benyou Wang, Dawei Song

A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words. For example, existing word embedding approaches will assign high probabilities to the words "Penguin" and "Fly" if they frequently co-occur, but they fail to capture the fact that the two occur in an opposite sense - penguins do not fly. We hypothesize that humans do not associate a single polarity or sentiment with each word; a word contributes to the overall polarity of a combination of words depending on which other words it is combined with. This is analogous to the behavior of microscopic particles, which exist in all possible states at the same time and interfere with each other to give rise to new states depending upon their relative phases. We make use of the Hilbert space representation of such particles in quantum mechanics, ascribing to each word a relative phase, which is a complex number, and investigate two such quantum-inspired models to derive the meaning of a combination of words. The proposed models achieve better performance than state-of-the-art non-quantum models on the binary sentence classification task.

* This paper has been accepted by the 3rd Workshop on Representation Learning for NLP (RepL4NLP) 
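
The core intuition can be sketched in a few lines of NumPy: each word is an amplitude vector rotated by a relative phase, a word combination is a superposition, and opposed phases interfere destructively. All vectors and phases below are illustrative assumptions, not the paper's learned parameters.

```python
# Superposition and interference of complex-valued word representations.
import numpy as np

dim = 4
rng = np.random.default_rng(0)

def complex_embedding(amplitude: np.ndarray, phase: float) -> np.ndarray:
    """Word state: a real amplitude vector rotated by a relative phase."""
    return amplitude * np.exp(1j * phase)

penguin = complex_embedding(rng.random(dim), phase=0.0)
fly = complex_embedding(rng.random(dim), phase=np.pi)  # opposed phase

combination = penguin + fly           # superposition of the two word states
intensity = np.abs(combination) ** 2  # measurement: squared magnitude
print(intensity)  # destructive interference lowers the combined intensity
```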

APR: Architectural Pattern Recommender

Mar 23, 2018
Shipra Sharma, Balwinder Sodhi

This paper proposes the Architectural Pattern Recommender (APR) system, which assists in the architecture selection process. The main contribution of this work is in replacing the manual effort required to identify and analyse relevant architectural patterns in the context of a particular set of software requirements. The key input to APR is a set of architecturally significant use cases concerning the application being developed. The central idea of APR's design is twofold: (a) transform the unstructured information about software architecture design into a structured form suitable for recognizing textual entailment between a requirement scenario and a potential architectural pattern; (b) leverage the rich experiential knowledge embedded in discussions on professional developer support forums such as Stackoverflow to check the sentiment about a design decision. APR uses both of these elements to identify a suitable architectural pattern and assess its suitability for a given set of requirements. The efficacy of APR has been evaluated by comparing its recommendations against "ground truth" scenarios (comprising applications whose architecture is well known).

* Sharma, S., & Sodhi, B. (2017, April). APR: architectural pattern recommender. In Proceedings of the Symposium on Applied Computing (pp. 1225-1230). ACM 
* 6 Pages, 1 Figure. Published in SAC 2017 in Software Engineering Track 
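
As a toy illustration of APR's second ingredient, the sketch below scores community sentiment about a candidate pattern with a small hand-made lexicon. The posts, lexicon, and scoring rule are invented for illustration and are not APR's actual pipeline.

```python
# Toy lexicon-based sentiment score over developer-forum posts.
POSITIVE = {"scalable", "clean", "maintainable", "recommend"}
NEGATIVE = {"overkill", "brittle", "slow", "avoid"}

def pattern_sentiment(posts):
    """Net (positive - negative) lexicon hits, normalised by token count."""
    pos = neg = total = 0
    for post in posts:
        for token in post.lower().split():
            total += 1
            pos += token in POSITIVE
            neg += token in NEGATIVE
    return (pos - neg) / max(total, 1)

posts_about_microservices = [  # hypothetical Stackoverflow snippets
    "Microservices are scalable but can be overkill for small teams",
    "I would recommend them here, the services stay clean and maintainable",
]
print(pattern_sentiment(posts_about_microservices))
```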

Identity-sensitive Word Embedding through Heterogeneous Networks

Nov 29, 2016
Jian Tang, Meng Qu, Qiaozhu Mei

Most existing word embedding approaches do not distinguish the same word in different contexts and therefore ignore its contextual meanings. As a result, the learned embeddings of such words are usually a mixture of multiple meanings. In this paper, we acknowledge multiple identities of the same word in different contexts and learn identity-sensitive word embeddings. Based on an identity-labeled text corpus, a heterogeneous network of words and word identities is constructed to model different levels of word co-occurrence. The heterogeneous network is then embedded into a low-dimensional space through a principled network embedding approach, which yields the embeddings of both words and word identities. We study three different types of word identities: topics, sentiments, and categories. Experimental results on real-world data sets show that the identity-sensitive word embeddings learned by our approach indeed capture different meanings of words and outperform competitive methods on tasks including text classification and word similarity computation.
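
A minimal sketch of the network construction step: word-word edges from sentence-level co-occurrence plus word-identity edges linking each word to the identity (here, a sentiment label) of the text it appears in. The toy corpus is an illustrative assumption; the paper then embeds this heterogeneous network with its principled network embedding approach.

```python
# Build word-word and word-identity edges for the heterogeneous network.
from collections import Counter
from itertools import combinations

corpus = [("the movie was great fun", "positive"),
          ("the plot was dull and slow", "negative")]

word_word = Counter()      # homogeneous word co-occurrence edges
word_identity = Counter()  # heterogeneous word-identity edges
for text, identity in corpus:
    tokens = set(text.split())
    for u, v in combinations(sorted(tokens), 2):
        word_word[(u, v)] += 1
    for token in tokens:
        word_identity[(token, identity)] += 1

print(word_identity.most_common(4))
```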

