Sudipta Kar

SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)

May 25, 2023
Besnik Fetahu, Sudipta Kar, Zhiyu Chen, Oleg Rokhlenko, Shervin Malmasi

We present the findings of SemEval-2023 Task 2 on Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2). Divided into 13 tracks, the task focused on methods to identify complex fine-grained named entities (like WRITTENWORK, VEHICLE, MUSICALGRP) across 12 languages, in both monolingual and multilingual scenarios, as well as noisy settings. The task used the MultiCoNER V2 dataset, composed of 2.2 million instances in Bangla, Chinese, English, Farsi, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, and Ukrainian. MultiCoNER 2 was one of the most popular tasks of SemEval-2023, attracting 842 submissions from 47 teams, with 34 teams submitting system papers. Results showed that complex entity types such as media titles and product names were the most challenging. Methods fusing external knowledge into transformer models achieved the best performance, with the largest gains on the Creative Work and Group classes, which remain challenging even with external knowledge. Some fine-grained classes proved more challenging than others, such as SCIENTIST, ARTWORK, and PRIVATECORP. We also observed that noisy data has a significant impact on model performance, with an average drop of 10% on the noisy subset. The task highlights the need for future research on improving NER robustness on noisy data containing complex entities.

* SemEval-2023 (co-located with ACL-2023 in Toronto, Canada) 

Preventing Catastrophic Forgetting in Continual Learning of New Natural Language Tasks

Feb 22, 2023
Sudipta Kar, Giuseppe Castellucci, Simone Filice, Shervin Malmasi, Oleg Rokhlenko

Multi-Task Learning (MTL) is widely accepted in Natural Language Processing as a standard technique for learning multiple related tasks in one model. Training an MTL model requires having the training data for all tasks available at the same time. As systems usually evolve over time (e.g., to support new functionalities), adding a new task to an existing MTL model usually requires retraining the model from scratch on all tasks, which can be time-consuming and computationally expensive. Moreover, in some scenarios, the data used to train the original model may no longer be available, for example due to storage or privacy concerns. In this paper, we approach the problem of incrementally expanding an MTL model's capability to solve new tasks over time by distilling the knowledge of a model already trained on n tasks into a new one that solves n+1 tasks. To avoid catastrophic forgetting, we propose to exploit unlabeled data drawn from the same distributions as the old tasks. Our experiments on publicly available benchmarks show that this technique dramatically benefits the distillation by preserving already acquired knowledge (i.e., preventing up to 20% performance drops on old tasks) while obtaining good performance on the incrementally added tasks. Further, we show that our approach is beneficial in practical settings by using data from a leading voice assistant.
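
To make the recipe concrete, below is a minimal PyTorch-style sketch of one training step, assuming a student/teacher pair that expose a hypothetical task argument for selecting an output head; the loss weighting alpha and the distillation temperature are illustrative defaults, not the paper's settings.

    import torch
    import torch.nn.functional as F

    def distillation_step(student, teacher, new_batch, unlabeled_old_batch,
                          temperature=2.0, alpha=0.5):
        """One training step: learn the new task from labels while distilling
        the frozen teacher's behavior on unlabeled old-task inputs.
        `student`, `teacher`, and their `task` argument are assumptions."""
        x_new, y_new = new_batch
        # Supervised cross-entropy on the newly added task.
        ce_loss = F.cross_entropy(student(x_new, task="new"), y_new)

        # Distillation on unlabeled data drawn from the old tasks'
        # distributions: match the student's soft predictions to the teacher's.
        with torch.no_grad():
            t_logits = teacher(unlabeled_old_batch, task="old")
        s_logits = student(unlabeled_old_batch, task="old")
        kd_loss = F.kl_div(
            F.log_softmax(s_logits / temperature, dim=-1),
            F.softmax(t_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2

        return alpha * ce_loss + (1 - alpha) * kd_loss

The temperature softens both distributions, the standard knowledge-distillation trick; the point the abstract makes is that the distillation inputs are unlabeled examples from the old tasks' distributions, so no old labels are needed.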

* KDD 2022 

Learning to Retrieve Engaging Follow-Up Queries

Feb 21, 2023
Christopher Richardson, Sudipta Kar, Anjishnu Kumar, Anand Ramachandran, Omar Zia Khan, Zeynab Raeesy, Abhinav Sethy

Open-domain conversational agents can answer a broad range of targeted queries. However, the sequential nature of interaction with these systems makes knowledge exploration a lengthy task that burdens the user with asking a chain of well-phrased questions. In this paper, we present a retrieval-based system and an associated dataset for predicting the next questions the user might have. Such a system can proactively assist users in knowledge exploration, leading to a more engaging dialog. The retrieval system is trained on a dataset containing ~14K multi-turn information-seeking conversations, each with a valid follow-up question and a set of invalid candidates. The invalid candidates are generated to simulate various syntactic and semantic confounders such as paraphrases, partial entity matches, irrelevant entities, and ASR errors. We use confounder-specific techniques to simulate these negative examples on the OR-QuAC dataset, yielding a dataset we call the Follow-up Query Bank (FQ-Bank). We then train ranking models on FQ-Bank and present results comparing supervised and unsupervised approaches. The results suggest that we can retrieve the valid follow-ups by ranking them above the confounders, but that further knowledge grounding can improve ranking performance.
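
As a sketch of the unsupervised side of such a comparison, the snippet below ranks candidate follow-ups by embedding similarity to the dialog context. The encode callable stands in for any sentence encoder returning a 1-D tensor; it is an assumption of this sketch, not the paper's model.

    import torch
    import torch.nn.functional as F

    def rank_followups(encode, context, candidates):
        """Rank candidate follow-up queries against the dialog context by
        cosine similarity of their embeddings (an unsupervised baseline)."""
        ctx = F.normalize(encode(context), dim=-1)
        cands = F.normalize(torch.stack([encode(c) for c in candidates]), dim=-1)
        scores = cands @ ctx  # cosine similarity per candidate
        order = torch.argsort(scores, descending=True)
        # A good ranker should place the valid follow-up above confounders
        # such as paraphrases, partial entity matches, and ASR errors.
        return [(candidates[i], scores[i].item()) for i in order]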

* EACL 2023 

MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition

Aug 30, 2022
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, Oleg Rokhlenko

We present MultiCoNER, a large multilingual dataset for Named Entity Recognition that covers 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixed subsets. The dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities like movie titles, and long-tail entity distributions. The 26M token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We applied two NER models to our dataset: a baseline XLM-RoBERTa model and a state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1=54%), highlighting the difficulty of our data. GEMNET, which uses gazetteers, improves significantly over the baseline (an average improvement of +30% macro-F1). MultiCoNER poses challenges even for large pre-trained language models, and we believe it can help further research in building robust NER systems. MultiCoNER is publicly available at https://registry.opendata.aws/multiconer/ and we hope this resource will help advance research in various aspects of NER.
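
GEMNET's actual gating mechanism is more involved; the toy module below only illustrates the general idea of fusing per-token gazetteer match features with transformer token states before classification. The module name and dimensions are assumptions for illustration.

    import torch
    import torch.nn as nn

    class GazetteerFusionTagger(nn.Module):
        """Toy NER head that concatenates transformer token states with
        projected binary gazetteer-match features; a simplified sketch,
        not a reimplementation of GEMNET."""

        def __init__(self, hidden_size, num_gazetteers, num_labels):
            super().__init__()
            self.gaz_proj = nn.Linear(num_gazetteers, hidden_size)
            self.classifier = nn.Linear(2 * hidden_size, num_labels)

        def forward(self, token_states, gaz_features):
            # token_states: (batch, seq, hidden), e.g. from XLM-RoBERTa
            # gaz_features: (batch, seq, num_gazetteers) 0/1 match indicators
            fused = torch.cat([token_states, self.gaz_proj(gaz_features)], dim=-1)
            return self.classifier(fused)  # per-token label logits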

* Accepted at COLING 2022 

SemEval-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets

Aug 10, 2020
Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Björn Gambäck, Tanmoy Chakraborty, Thamar Solorio, Amitava Das

In this paper, we present the results of SemEval-2020 Task 9 on Sentiment Analysis of Code-Mixed Tweets (SentiMix 2020). We also release and describe our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora, annotated with word-level language identification and sentence-level sentiment labels. These corpora comprise 20K and 19K examples, respectively. The sentiment labels are Positive, Negative, and Neutral. SentiMix attracted 89 submissions in total, with 61 teams participating in the Hinglish contest and 28 teams submitting systems to the Spanglish competition. The best performance achieved was a 75.0% F1 score for Hinglish and an 80.6% F1 score for Spanglish. We observe that BERT-like models and ensemble methods were the most common and successful approaches among the participants.

* Accepted at SemEval-2020, COLING 

LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation

May 09, 2020
Gustavo Aguilar, Sudipta Kar, Thamar Solorio

Recent trends in NLP research have raised interest in linguistic code-switching (CS); modern approaches have been proposed to solve a wide range of NLP tasks on multiple language pairs. Unfortunately, these proposed methods are hardly generalizable to different code-switched languages. In addition, it is unclear whether a model architecture is applicable to a different task while still being compatible with the code-switching setting. This is mainly because of the lack of a centralized benchmark and the sparse corpora that researchers employ based on their specific needs and interests. To facilitate research in this direction, we propose a centralized benchmark for Linguistic Code-switching Evaluation (LinCE) that combines ten corpora covering four different code-switched language pairs (i.e., Spanish-English, Nepali-English, Hindi-English, and Modern Standard Arabic-Egyptian Arabic) and four tasks (i.e., language identification, named entity recognition, part-of-speech tagging, and sentiment analysis). As part of the benchmark centralization effort, we provide an online platform at ritual.uh.edu/lince, where researchers can submit their results and compare with others in real time. In addition, we provide the scores of several popular models, including LSTM, ELMo, and multilingual BERT, so that the NLP community can compare against state-of-the-art systems. LinCE is a continuous effort, and we will expand it with more low-resource languages and tasks.

* Accepted to LREC 2020 

BanFakeNews: A Dataset for Detecting Fake News in Bangla

Apr 19, 2020
Md Zobaer Hossain, Md Ashraful Rahman, Md Saiful Islam, Sudipta Kar

Observing the damage that can be done by the rapid propagation of fake news in sectors like politics and finance, automatic identification of fake news using linguistic analysis has drawn the attention of the research community. However, such methods have largely been developed for English, leaving low-resource languages out of focus; the risks posed by fake and manipulative news, though, are not confined to any single language. In this work, we propose an annotated dataset of ~50K news articles that can be used for building automated fake news detection systems for a low-resource language like Bangla. Additionally, we provide an analysis of the dataset and develop a benchmark system using state-of-the-art NLP techniques to identify Bangla fake news. To build this system, we explore both traditional linguistic features and neural network based methods. We expect this dataset to be a valuable resource for building technologies that prevent the spread of fake news and to contribute to research on low-resource languages.
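
As a hedged sketch of what a traditional-feature baseline for such a benchmark might look like, the scikit-learn pipeline below pairs character n-gram TF-IDF features with a linear SVM; the n-gram ranges, feature cap, and classifier choice are assumptions, not the paper's configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Character n-grams are robust for morphologically rich languages
    # like Bangla; the exact feature set here is illustrative.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=50000),
        LinearSVC(),
    )

    # news_texts: list of Bangla article strings; labels: 1 = fake, 0 = authentic
    # model.fit(news_texts, labels)
    # predictions = model.predict(unseen_texts)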

* LREC 2020 

Attending the Emotions to Detect Online Abusive Language

Sep 06, 2019
Niloofar Safi Samghabadi, Afsheen Hatami, Mahsa Shafaei, Sudipta Kar, Thamar Solorio

In recent years, abusive behavior has become a serious issue in online social networks. In this paper, we present a new corpus from a semi-anonymous social media platform, containing instances of offensive and neutral classes. We introduce a single deep neural architecture that considers both local and sequential information from the text in order to detect abusive language. Along with this model, we introduce a new attention mechanism called emotion-aware attention, which uses the emotions behind the text to find the most important words within that text. We experiment with this model on our dataset and present an analysis of the results. Additionally, we evaluate our proposed method on different corpora and show new state-of-the-art results for offensive language detection.
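
The sketch below shows one plausible form of emotion-aware attention: additive attention whose query comes from an emotion representation of the text, so words aligned with the expressed emotion receive more weight. It is a minimal illustration under stated assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    class EmotionAwareAttention(nn.Module):
        """Additive attention conditioned on an emotion vector; the class
        name and dimensions are assumptions for illustration."""

        def __init__(self, hidden_size, emotion_size):
            super().__init__()
            self.w_h = nn.Linear(hidden_size, hidden_size, bias=False)
            self.w_e = nn.Linear(emotion_size, hidden_size, bias=False)
            self.v = nn.Linear(hidden_size, 1, bias=False)

        def forward(self, word_states, emotion_vec):
            # word_states: (batch, seq, hidden); emotion_vec: (batch, emotion_size)
            energy = torch.tanh(self.w_h(word_states)
                                + self.w_e(emotion_vec).unsqueeze(1))
            weights = torch.softmax(self.v(energy).squeeze(-1), dim=-1)
            # Emotion-weighted sum of word states -> sentence representation.
            return torch.einsum("bs,bsh->bh", weights, word_states), weights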


Multi-view Characterization of Stories from Narratives and Reviews using Multi-label Ranking

Aug 24, 2019
Sudipta Kar, Gustavo Aguilar, Thamar Solorio

This paper considers the problem of characterizing stories by inferring attributes like theme and genre from the written narrative and user reviews. We experiment with a multi-label dataset of narratives representing the stories of movies and a tagset representing various attributes of stories. To identify the story attributes, we propose a hierarchical representation of narratives that improves over traditional feature-based machine learning methods as well as sequential representation approaches. Finally, we demonstrate a multi-view method for discovering story attributes from user opinions in reviews that are complementary to the gold-standard dataset.
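
As a rough illustration of a hierarchical narrative encoder with multi-label tag scoring, consider the sketch below: a word-level GRU builds sentence vectors, a sentence-level GRU builds a narrative vector, and per-tag sigmoid scores induce a ranking over tags. The two-level GRU design and all dimensions are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class HierarchicalTagRanker(nn.Module):
        """Two-level (word -> sentence -> document) encoder with sigmoid
        tag scores for multi-label ranking; an illustrative sketch only."""

        def __init__(self, vocab_size, embed_dim, hidden, num_tags):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.word_rnn = nn.GRU(embed_dim, hidden, batch_first=True)
            self.sent_rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.scorer = nn.Linear(hidden, num_tags)

        def forward(self, docs):
            # docs: (batch, n_sents, n_words) token ids
            b, s, w = docs.shape
            words = self.embed(docs.view(b * s, w))
            _, sent_vecs = self.word_rnn(words)           # (1, b*s, hidden)
            sent_seq = sent_vecs.squeeze(0).view(b, s, -1)
            _, doc_vec = self.sent_rnn(sent_seq)          # (1, b, hidden)
            # Sigmoid scores per tag; sorting them yields the tag ranking.
            return torch.sigmoid(self.scorer(doc_vec.squeeze(0)))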
