Ayan Sengupta

Manifold-Preserving Transformers are Effective for Short-Long Range Encoding

Oct 22, 2023
Ayan Sengupta, Md Shad Akhtar, Tanmoy Chakraborty

Multi-head self-attention-based Transformers have shown promise in different learning tasks. Although these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, the encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers usually project tokens onto sparse manifolds and fail to preserve mathematical equivalence among the token representations. In this work, we propose TransJect, an encoder model that guarantees a theoretical bound for layer-wise distance preservation between a pair of tokens. We propose a simple alternative to dot-product attention to ensure Lipschitz continuity. This allows TransJect to learn injective mappings that transform token representations to different manifolds with similar topology and preserve the Euclidean distance between every pair of tokens in subsequent layers. Evaluations across multiple benchmark short- and long-sequence classification tasks show maximum improvements of 6.8% and 5.9%, respectively, over the variants of Transformers. Additionally, TransJect displays 79% better performance than the Transformer on the language modeling task. We further highlight the shortcomings of multi-head self-attention from a statistical physics viewpoint. Although multi-head self-attention was conceived to learn different levels of abstraction within a network, our empirical analyses suggest that different attention heads learn in a random, disorderly fashion. In contrast, TransJect adopts a mixture of experts for regularization; these experts are more orderly and balanced and learn different sparse representations from the input sequences. TransJect exhibits very low entropy and can be efficiently scaled to larger depths.
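The distance-preservation guarantee can be illustrated with a toy isometry, which is not the authors' construction (that is defined in the paper): any orthogonal feature transform preserves pairwise Euclidean distances exactly, unlike softmax dot-product attention. The function names below are hypothetical.

```python
# Minimal sketch: an orthogonal token-mixing step is an isometry, so pairwise
# Euclidean distances between tokens survive the layer exactly. Illustrative
# only; not TransJect's actual attention replacement.
import numpy as np

def orthogonal_layer(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Mix token features with a random orthogonal matrix."""
    d = x.shape[-1]
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # q is orthogonal: q.T @ q = I
    return x @ q

def pairwise_dists(x: np.ndarray) -> np.ndarray:
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 64))   # 8 tokens, 64-dim features
mixed = orthogonal_layer(tokens, rng)
# distances between every pair of tokens are preserved by the layer
assert np.allclose(pairwise_dists(tokens), pairwise_dists(mixed))
```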

* 17 pages, 7 figures, 5 tables, Findings of the Association for Computational Linguistics: EMNLP 2023 

Persona-aware Generative Model for Code-mixed Language

Sep 06, 2023
Ayan Sengupta, Md Shad Akhtar, Tanmoy Chakraborty

Code-mixing and script-mixing are prevalent across online social networks and multilingual societies. However, a user's preference toward code-mixing depends on their socioeconomic status, demographics, and the local context, which existing generative models mostly ignore while generating code-mixed texts. In this work, we make a pioneering attempt to develop a persona-aware generative model that generates texts resembling the real-life code-mixed texts of individuals. We propose PARADOX, a Persona-aware Generative Model for Code-mixed Generation: a novel Transformer-based encoder-decoder model that encodes an utterance conditioned on a user's persona and generates code-mixed texts without monolingual reference data. We also propose an alignment module that re-calibrates the generated sequence to resemble real-life code-mixed texts. PARADOX generates code-mixed texts that are semantically more meaningful and linguistically more valid. To evaluate the personification capabilities of PARADOX, we propose four new metrics -- CM BLEU, CM Rouge-1, CM Rouge-L and CM KS. On average, PARADOX achieves 1.6 points better CM BLEU, 47% better perplexity and 32% better semantic coherence than its non-persona-based counterparts.
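One simple way to realise persona conditioning, sketched below purely for illustration (PARADOX's actual architecture is described in the paper), is to embed the persona as an extra vector and prepend it to the token sequence before a standard Transformer encoder. All module names and dimensions are hypothetical.

```python
# Hypothetical sketch of persona-conditioned encoding; not PARADOX itself.
import torch
import torch.nn as nn

class PersonaConditionedEncoder(nn.Module):
    def __init__(self, vocab_size=32000, n_personas=100, d_model=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.persona = nn.Embedding(n_personas, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids, persona_id):
        x = self.tok(token_ids)                     # (batch, seq, d)
        p = self.persona(persona_id).unsqueeze(1)   # (batch, 1, d)
        # the persona embedding attends with every token as a prepended "token"
        return self.encoder(torch.cat([p, x], dim=1))

enc = PersonaConditionedEncoder()
out = enc(torch.randint(0, 32000, (2, 10)), torch.tensor([3, 7]))
print(out.shape)  # torch.Size([2, 11, 256])
```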

* 4 tables, 4 figures 

A Comprehensive Understanding of Code-mixed Language Semantics using Hierarchical Transformer

Apr 27, 2022
Ayan Sengupta, Tharun Suresh, Md Shad Akhtar, Tanmoy Chakraborty

As a popular mode of text-based communication in multilingual communities, code-mixing in online social media has become an important subject of study. Learning the semantics and morphology of code-mixed language remains a key challenge due to the scarcity of data and the unavailability of robust, language-invariant representation learning techniques. Any morphologically rich language can benefit from character-, subword-, and word-level embeddings, which aid in learning meaningful correlations. In this paper, we explore a hierarchical transformer-based architecture (HIT) to learn the semantics of code-mixed languages. HIT consists of multi-headed self-attention and outer-product attention components to simultaneously comprehend the semantic and syntactic structures of code-mixed texts. We evaluate the proposed method across 6 Indian languages (Bengali, Gujarati, Hindi, Tamil, Telugu and Malayalam) and Spanish on 9 NLP tasks spanning 17 datasets. The HIT model outperforms state-of-the-art code-mixed representation learning and multilingual language models on all tasks. We further demonstrate the generalizability of the HIT architecture using masked-language-modeling-based pre-training, zero-shot learning, and transfer learning approaches. Our empirical results show that the pre-training objectives significantly improve performance on downstream tasks.
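The hierarchical part of the idea can be sketched in a few lines: pool character embeddings into word vectors, then contextualise the words with self-attention. This is a hypothetical simplification, not HIT's implementation; the pooling choice, sizes, and names are made up.

```python
# Toy character -> word hierarchy, in the spirit of hierarchical encoding.
import torch
import torch.nn as nn

class CharToWordEncoder(nn.Module):
    """Pool character embeddings into word vectors, then contextualise words."""
    def __init__(self, n_chars=128, d=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d, padding_idx=0)
        self.word_ctx = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (batch, n_words, n_chars_per_word)
        chars = self.char_emb(char_ids)   # (b, w, c, d)
        words = chars.mean(dim=2)         # simple mean-pool over characters
        return self.word_ctx(words)       # word-level self-attention

enc = CharToWordEncoder()
x = torch.randint(1, 128, (2, 6, 12))     # 2 sentences, 6 words, 12 chars each
print(enc(x).shape)                       # torch.Size([2, 6, 64])
```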

* 12 pages, 1 figure, 11 tables 

HIT: A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation

May 30, 2021
Ayan Sengupta, Sourabh Kumar Bhattacharjee, Tanmoy Chakraborty, Md Shad Akhtar

Understanding the linguistics and morphology of resource-scarce code-mixed texts remains a key challenge in text processing. Although word embeddings come in handy for supporting downstream tasks in low-resource languages, there is ample scope for improving the quality of language representations, particularly for code-mixed languages. In this paper, we propose HIT, a robust representation learning method for code-mixed texts. HIT is a hierarchical transformer-based framework that captures the semantic relationships among words and hierarchically learns sentence-level semantics using a fused attention mechanism. HIT incorporates two attention modules, a multi-headed self-attention module and an outer-product attention module, and computes their weighted sum to obtain the attention weights. Our evaluation of HIT on one European (Spanish) and five Indic (Hindi, Bengali, Tamil, Telugu, and Malayalam) languages across four NLP tasks on eleven datasets suggests significant performance improvements over various state-of-the-art systems. We further show the adaptability of the learned representations across tasks in a transfer learning setup (with and without fine-tuning).
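The fused-attention step, a weighted sum of two attention maps, can be sketched as follows. The outer-product attention in the paper has its own formulation; the additive stand-in below is used only to demonstrate how two maps are mixed by a learned weight. All names are hypothetical.

```python
# Illustrative fusion of two attention maps by a learned weighted sum.
import torch
import torch.nn as nn

class FusedAttention(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned mixing weight

    def forward(self, x):                 # x: (batch, seq, d)
        q, k, v = self.q(x), self.k(x), self.v(x)
        dot = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        # stand-in second map: additive scores instead of the paper's
        # outer-product attention
        add = torch.softmax((q.unsqueeze(2) + k.unsqueeze(1)).sum(-1), dim=-1)
        a = torch.sigmoid(self.alpha)
        return (a * dot + (1 - a) * add) @ v  # weighted sum of attention maps

attn = FusedAttention()
print(attn(torch.randn(2, 5, 64)).shape)   # torch.Size([2, 5, 64])
```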

* 15 pages, 13 tables, 6 Figures. Accepted at ACL-IJCNLP-2021 (Findings) 

An Embedding-based Joint Sentiment-Topic Model for Short Texts

Mar 26, 2021
Ayan Sengupta, William Scott Paka, Suman Roy, Gaurav Ranjan, Tanmoy Chakraborty

Short texts are a popular avenue for sharing feedback, opinions and reviews on social media, e-commerce platforms, etc. Many companies need to extract meaningful information (which may include thematic content as well as semantic polarity) from such short texts to understand users' behaviour. However, obtaining high-quality, sentiment-associated and human-interpretable themes from short texts remains a challenge. In this paper, we develop ELJST, an embedding-enhanced generative joint sentiment-topic model that can discover more coherent and diverse topics from short texts. It uses a Markov Random Field regularizer that can be seen as a generalisation of skip-gram-based models. Further, it can leverage higher-order semantic information appearing in word embeddings, such as self-attention weights in graphical models. Our results show an average improvement of 10% in topic coherence and 5% in topic diversification over baselines. Finally, ELJST helps understand users' behaviour at a more granular, explainable level. All of this can bring significant value to the service and healthcare industries, which frequently deal with customers.
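The intuition behind the MRF regularizer, that embedding-similar words should share topic assignments, can be illustrated with a toy agreement score over an embedding-similarity graph. This is not ELJST's sampler; the threshold, sizes, and names are invented for the example.

```python
# Toy sketch of the MRF-regularisation intuition: neighbouring (similar)
# words are encouraged to share topic assignments.
import numpy as np

def mrf_agreement_score(embeddings: np.ndarray, topics: np.ndarray,
                        sim_threshold: float = 0.7) -> float:
    """Fraction of embedding-similar word pairs that share a topic."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                  # cosine similarities
    i, j = np.triu_indices(len(topics), k=1)
    edges = sim[i, j] > sim_threshold        # MRF edges: similar pairs
    if not edges.any():
        return 1.0
    return float((topics[i][edges] == topics[j][edges]).mean())

rng = np.random.default_rng(0)
emb = rng.standard_normal((20, 50))          # 20 words, 50-dim embeddings
topics = rng.integers(0, 5, size=20)         # hard topic assignments
print(mrf_agreement_score(emb, topics))
```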

* Accepted at the International AAAI Conference on Web and Social Media (ICWSM), 2021 

An Autonomous Negotiating Agent Framework with Reinforcement Learning Based Strategies and Adaptive Strategy Switching Mechanism

Feb 09, 2021
Ayan Sengupta, Yasser Mohammad, Shinji Nakadai

Despite the abundance of negotiation strategies in the literature, the complexity of automated negotiation prevents any single strategy from dominating all others across different negotiation scenarios. One way to overcome this is to use a mixture of experts, but this method hinges on the selection of experts, as it is limited by the competence of the experts selected. Another problem with most negotiation strategies is their inability to adapt to dynamic variations in the opponent's behaviour within a single negotiation session, resulting in poor performance. This work addresses both problems, expert selection and adaptation to the opponent's behaviour, with our Autonomous Negotiating Agent Framework. The framework allows real-time classification of the opponent's behaviour and provides a mechanism to select, switch or combine strategies within a single negotiation session. Additionally, it has a reviewer component that enables self-enhancement by periodically deciding whether to include new strategies or replace old ones with better strategies. We demonstrate an instance of our framework by implementing maximum-entropy reinforcement learning based strategies with a deep learning based opponent classifier. Finally, we evaluate the performance of our agent against state-of-the-art negotiators under varied negotiation scenarios.
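The select/switch loop can be sketched schematically: a classifier maps the offer history to an opponent type, which indexes into a pool of expert strategies. This is a toy outline, not the authors' framework; the classes, experts, and classifier below are invented.

```python
# Schematic sketch of real-time strategy selection/switching.
import random

class StrategySwitcher:
    def __init__(self, strategies, classifier):
        self.strategies = strategies   # opponent type -> expert strategy
        self.classifier = classifier   # offer history -> opponent type

    def next_offer(self, history):
        opponent_type = self.classifier(history)   # classify opponent behaviour
        strategy = self.strategies[opponent_type]  # select or switch expert
        return strategy(history)

# Toy experts and a trivial classifier, purely for illustration.
strategies = {
    "conceder":  lambda h: max(0.5, 1.0 - 0.1 * len(h)),  # concede over time
    "hardliner": lambda h: 0.9,                            # hold firm
}
classifier = lambda h: "conceder" if len(h) > 5 else "hardliner"
agent = StrategySwitcher(strategies, classifier)
print(agent.next_offer([random.random() for _ in range(8)]))  # 0.5
```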

* Accepted at AAMAS 2021 

Fault Detection Engine in Intelligent Predictive Analytics Platform for DCIM

Oct 16, 2016
Bodhisattwa Prasad Majumder, Ayan Sengupta, Sajal Jain, Parikshit Bhaduri

With the advancement of large-scale data generation and data handling capabilities, machine learning and probabilistic modelling present an immense opportunity to employ predictive analytics platforms in security-critical industries, namely data centers, electricity grids, utilities, airports, etc., where downtime minimization is one of the primary objectives. This paper proposes a novel, complete architecture for an intelligent predictive analytics platform, Fault Engine, for a large network of devices connected by electrical/information flow. The three modules proposed here integrate seamlessly with the available data handling technology stack and connect with middleware to produce online, intelligent predictions in critical failure scenarios. The Markov Failure module predicts the severity of a failure along with the survival probability of a device at any given instant. The Root Cause Analysis module identifies probable devices as potential root causes, employing Bayesian probability assignment and topological sort. Finally, a community detection algorithm produces clusters of devices correlated in terms of failure probability, which further narrows down the search space for finding the root cause. The whole engine has been tested on networks of different sizes under simulated failure environments and shows its potential to scale in real-time implementations.
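The Markov failure idea can be illustrated with a three-state toy chain: a device transitions between healthy, degraded, and an absorbing failed state, and its survival probability at time t is the mass not yet absorbed. The transition matrix below is invented for illustration and is not from the paper.

```python
# Toy Markov chain for device failure; numbers are illustrative only.
import numpy as np

# states: 0 = healthy, 1 = degraded, 2 = failed (absorbing)
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])   # device starts healthy
for t in range(1, 11):
    state = state @ P               # one-step Markov transition
    survival = 1.0 - state[2]       # probability the device has not failed
    print(f"t={t:2d}  survival={survival:.3f}")
```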

* Accepted at the 4th International Conference on Business Analytics and Intelligence (ICBAI 2016) 