Contextual representations learned by language models often encode undesirable attributes, such as demographic associations of users, even when trained for an unrelated target task. We aim to scrub such undesirable attributes and learn fair representations while maintaining performance on the target task. In this paper, we present Adversarial Scrubber (ADS), an adversarial learning framework for debiasing contextual representations. We perform theoretical analysis to show that our framework converges without leaking demographic information under certain conditions. We extend previous evaluation techniques by measuring debiasing performance using Minimum Description Length (MDL) probing. Experimental evaluations on 8 datasets show that ADS generates representations with minimal information about demographic attributes while being maximally informative about the target task.
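The core mechanism of adversarial debiasing can be illustrated with a gradient-reversal setup, a standard device for this kind of framework. The following is a minimal sketch, not the paper's actual ADS implementation; the encoder, heads, dimensions, and the `lambda_adv` coefficient are illustrative assumptions.

```python
# Minimal sketch of adversarial debiasing via gradient reversal (illustrative,
# not the paper's exact ADS architecture).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_adv * grad_output, None

encoder = nn.Sequential(nn.Linear(768, 256), nn.ReLU())  # stand-in for a contextual encoder
task_head = nn.Linear(256, 2)                            # target-task classifier
adv_head = nn.Linear(256, 2)                             # adversary predicting the protected attribute

x = torch.randn(8, 768)             # batch of contextual representations
y_task = torch.randint(0, 2, (8,))  # target-task labels
y_attr = torch.randint(0, 2, (8,))  # protected-attribute labels

z = encoder(x)
loss = nn.functional.cross_entropy(task_head(z), y_task) \
     + nn.functional.cross_entropy(adv_head(GradReverse.apply(z, 1.0)), y_attr)
loss.backward()  # the encoder is pushed to keep task signal and scrub attribute signal
```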
Sarcasm detection is important for several NLP tasks such as sentiment identification in product reviews, user feedback, and online forums. It is a challenging task requiring a deep understanding of language, context, and world knowledge. In this paper, we investigate whether incorporating commonsense knowledge helps in sarcasm detection. To this end, we inject commonsense knowledge into the prediction process using a graph convolution network with pre-trained language model embeddings as input. Our experiments with three sarcasm detection datasets indicate that the approach does not outperform the baseline model. We perform an exhaustive set of experiments to analyze where commonsense support adds value and where it hurts classification. Our implementation is publicly available at: https://github.com/brcsomnath/commonsense-sarcasm.
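As one concrete illustration of the pipeline, here is a minimal sketch of a single graph-convolution step over a toy graph, assuming node features come from a pre-trained LM; the adjacency matrix, dimensions, and initialization are illustrative assumptions, not the paper's exact setup.

```python
# One Kipf-Welling style GCN propagation step over a toy commonsense graph
# (illustrative shapes; not the paper's exact configuration).
import torch
import torch.nn as nn

def gcn_layer(A, H, W):
    """Propagation rule: relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = torch.diag(A_hat.sum(1).pow(-0.5))
    return torch.relu(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

A = torch.tensor([[0., 1., 0.],   # toy graph linking sentence-token nodes to
                  [1., 0., 1.],   # commonsense-concept nodes (e.g. from a KB)
                  [0., 1., 0.]])
H = torch.randn(3, 768)           # LM embeddings as initial node features
W = nn.Parameter(torch.randn(768, 128) * 0.01)
H1 = gcn_layer(A, H, W)           # refined node states used for classification
```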
The task of organizing a shuffled set of sentences into a coherent text is important in NLP and has been used to evaluate a machine's understanding of causal and temporal relations. We present Reorder-BART (RE-BART), a sentence ordering framework that leverages a pre-trained transformer-based model to identify a coherent order for a given set of shuffled sentences. We reformulate the task as a conditional text-to-marker generation setup, where the input is a set of shuffled sentences with sentence-specific markers and the output is the sequence of position markers of the ordered text. Our framework achieves state-of-the-art performance across six datasets in the Perfect Match Ratio (PMR) and Kendall's tau ($\tau$) metrics. We perform evaluations in a zero-shot setting, showcasing that our model generalizes well across other datasets. We additionally perform a series of experiments to understand the functioning and explore the limitations of our framework.
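The text-to-marker formulation is easiest to see through the serialization of inputs and outputs. Below is a minimal sketch under assumed marker tokens (`<S1>`, `<S2>`, ...); the actual special tokens and fine-tuning details may differ.

```python
# Minimal sketch of the text-to-marker serialization (marker tokens assumed).
shuffled = [
    "He poured the milk.",           # <S1>
    "John woke up hungry.",          # <S2>
    "He grabbed a bowl of cereal.",  # <S3>
]

source = " ".join(f"<S{i + 1}> {s}" for i, s in enumerate(shuffled))
target = "<S2> <S3> <S1>"  # gold order: a sequence of markers, not the text itself

print(source)
# <S1> He poured the milk. <S2> John woke up hungry. <S3> He grabbed a bowl of cereal.
# A seq2seq model such as BART is fine-tuned to generate `target` from `source`.
```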
Machine learning has been the quintessential solution for many AI problems, but learning remains heavily dependent on the specific training data. Some learning models can incorporate prior knowledge in a Bayesian setup, but these models cannot access organized world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that uses an attention mechanism to extract task-relevant prior support facts from knowledge graphs. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method scales well with the amount of prior information to be processed and can be applied to any generic NLP task. Using this method, we show significant improvements in performance for text classification on the News20 and DBPedia datasets and for natural language inference on the Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained well with substantially less labeled training data when it has access to organized world knowledge in the form of a knowledge graph.
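To make the fact-selection step concrete, here is a minimal sketch of soft attention over encoded fact triples; the triple encodings, dimensions, and the way the support vector is consumed are illustrative assumptions standing in for the paper's convolution-based cluster representations.

```python
# Soft attention over KG fact-triple encodings to build a prior-support
# vector for one task instance (illustrative shapes and encodings).
import torch

query = torch.randn(1, 128)      # encoding of the input text
triples = torch.randn(500, 128)  # encodings of (head, relation, tail) facts

scores = (query @ triples.T) / 128 ** 0.5       # scaled dot-product relevance
weights = torch.softmax(scores, dim=-1)         # soft attention over facts
support = weights @ triples                     # weighted prior-knowledge vector

features = torch.cat([query, support], dim=-1)  # fed to the downstream classifier
```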
Supervised learning models are typically trained on a single dataset, and their performance relies heavily on the size of the dataset, i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data presented to them during training. In this work, we propose an inductive transfer learning method that augments learning models by infusing similar instances from different learning tasks in the Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, \textit{without inheriting anything} from the source learning model. Representations of the instances of the \textit{source} \& \textit{target} datasets are learned, relevant source instances are retrieved using a soft-attention mechanism and \textit{locality sensitive hashing}, and these instances are then fused into the model during training on the target dataset. Our approach simultaneously exploits local \textit{instance-level information} as well as the macro-statistical viewpoint of the dataset. Using this approach, we show significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross-dataset learning procedure, we show that one can achieve competitive or better performance than learning from a single dataset.
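The retrieval step can be sketched with random-hyperplane LSH, which buckets instance representations by the sign pattern of random projections; the number of hyperplanes, dimensions, and data below are illustrative assumptions.

```python
# Random-hyperplane LSH for retrieving similar source instances
# (illustrative sizes; not the paper's exact configuration).
import numpy as np

rng = np.random.default_rng(0)
planes = rng.normal(size=(16, 128))    # 16 random hyperplanes -> 16-bit codes

def lsh_code(x):
    """Sign pattern of projections onto the hyperplanes, as a hashable tuple."""
    return tuple((planes @ x > 0).astype(int))

source = rng.normal(size=(10000, 128))  # source-instance representations
buckets = {}
for i, v in enumerate(source):
    buckets.setdefault(lsh_code(v), []).append(i)

target_vec = rng.normal(size=128)       # a target-dataset instance
candidates = buckets.get(lsh_code(target_vec), [])
# `candidates` are the nearby source instances whose representations would be
# fused into the model via soft attention during training on the target task.
```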
Global Average Pooling (GAP) [4] has previously been used to generate class activations for image classification tasks. The motivation behind SIMILARnet comes from the fact that convolutional filters retain position information of the essential features; hence, a combination of the feature maps can help locate class instances in an image. We propose a biologically inspired model that is free of differential connections and does not require separate training, thereby reducing computational overhead. Our novel architecture generates promising results and, unlike existing methods, is not sensitive to the input image size, thus promising wider applicability. Code for the experiments and illustrations can be found at: https://github.com/brcsomnath/Advanced-GAP.
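The localization idea, weighting each feature map by its contribution to a class score, can be sketched as follows; the shapes and weights are illustrative, and this follows the generic class-activation-map recipe built on GAP rather than the exact SIMILARnet computation.

```python
# Class localization as a weighted sum of convolutional feature maps
# (generic CAM-style sketch with illustrative shapes).
import numpy as np

feature_maps = np.random.rand(512, 14, 14)  # final conv features (C, H, W)
class_weights = np.random.rand(512)         # weights tied to one class score

# The weighted sum highlights spatial regions driving the class score;
# upsampling this map to the image size yields the localization.
cam = np.tensordot(class_weights, feature_maps, axes=(0, 0))  # (14, 14)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
```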
Handwriting is a skill humans learn from a very early age. The ability to develop one's own unique handwriting, as well as to mimic another person's, is learned by the brain with practice. This paper deals with this problem: an intelligent system tries to learn the handwriting of an entity using Generative Adversarial Networks (GANs). We propose a modified architecture of DCGAN (Radford, Metz, and Chintala 2015) to achieve this. We also discuss applying reinforcement learning techniques to achieve faster learning. Our algorithm aims to give new insights into this area, with applications including identification of forged documents, signature verification, computer-generated art, and digitization of documents, among others. Our early implementation of the algorithm shows good performance on the MNIST dataset.
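For concreteness, a DCGAN-style generator for 28x28 grayscale glyphs (e.g., MNIST) might look like the sketch below; the layer sizes are illustrative assumptions, not the paper's exact modification of Radford et al.'s architecture.

```python
# DCGAN-style generator for 28x28 handwriting images (illustrative sizes).
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, 7, 1, 0),  # latent z (100x1x1) -> 128x7x7
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1),   # -> 64x14x14
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, 2, 1),     # -> 1x28x28 grayscale glyph
    nn.Tanh(),
)
fake = generator(torch.randn(1, 100, 1, 1))  # shape: (1, 1, 28, 28)
```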
Autonomous driving is a prominent recent topic of interest aimed at replicating human driving behavior while keeping safety issues in mind. We approach the problem of learning synthetic driving using generative neural networks. The main idea is to build a controller-trainer network that uses images plus key-press data to mimic human learning. We use a stable GAN architecture to predict transitions between driving scenes conditioned on key presses. We train our model on one video game (Road Rash) and test its accuracy by running the model on other maps in Road Rash to determine the extent of learning.
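The conditioning idea, combining a scene representation with a key-press vector to predict the next frame, can be sketched as follows; the encoder/decoder shapes and the key set are illustrative assumptions, not the paper's exact network.

```python
# Conditioning next-frame prediction on key presses (illustrative shapes).
import torch
import torch.nn as nn

KEYS = ["left", "right", "accelerate", "brake"]

frame_enc = torch.randn(1, 256)  # encoding of the current driving scene
key = torch.zeros(1, len(KEYS))
key[0, KEYS.index("left")] = 1.0  # one-hot key-press vector

decoder = nn.Sequential(nn.Linear(256 + len(KEYS), 512), nn.ReLU(),
                        nn.Linear(512, 64 * 64))          # flattened next frame
next_frame = decoder(torch.cat([frame_enc, key], dim=-1)).view(1, 1, 64, 64)
# In a GAN setup, `next_frame` would be scored by a discriminator against
# the real frame that followed this scene and key press.
```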