Abstract: One of the most alarming issues in digital society is hate speech (HS) on social media. The severity of the problem has drawn researchers across the globe to this domain, and a notable amount of work has been conducted on identification and alerting systems. However, a noticeable gap remains, especially for low-resource languages: the lack of comprehensive datasets is the main obstacle for resource-constrained languages such as Bangla. Interestingly, hate speech, like any other kind of speech, is rarely one-dimensional; a single hateful utterance can carry multiple abusive attributes at once, a property that existing datasets tend to miss. Thus, this work compiles and evaluates a multi-label Bangla hate speech dataset named BOISHOMMO, which includes categories of HS across race, gender, religion, politics, and more. With over two thousand annotated examples, BOISHOMMO provides a nuanced understanding of hate speech in Bangla. Besides evaluating the dataset with multiple algorithmic approaches, this work also highlights the complexities of processing non-Latin Bangla text and assesses model performance. This unique multi-label approach enriches future hate speech detection and analysis studies for low-resource languages by providing a more nuanced and diverse dataset.
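To make the multi-label setup concrete, the sketch below shows one common way to evaluate such a dataset: binarize the per-example label sets, train a simple one-vs-rest baseline on character n-gram features, and report micro/macro F1. This is a minimal illustration under stated assumptions, not the authors' pipeline; the label names, placeholder Bangla comments, and feature choices are assumptions.

from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Toy Bangla placeholders: each comment may carry several hate-speech labels at once.
texts = [
    "এটি প্রথম উদাহরণ মন্তব্য",
    "এটি দ্বিতীয় উদাহরণ মন্তব্য",
    "এটি তৃতীয় উদাহরণ মন্তব্য",
    "এটি চতুর্থ উদাহরণ মন্তব্য",
]
labels = [["religion", "politics"], ["gender"], [], ["race", "politics"]]

mlb = MultiLabelBinarizer()          # maps label sets to a binary indicator matrix
Y = mlb.fit_transform(labels)

# Character n-grams sidestep Bangla tokenization/morphology issues for a quick baseline.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(texts)

# One independent binary classifier per label (a real run would hold out a test split).
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X)

print("micro-F1:", f1_score(Y, pred, average="micro", zero_division=0))
print("macro-F1:", f1_score(Y, pred, average="macro", zero_division=0))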
Abstract: In recent years, Sentiment Analysis (SA) and Emotion Recognition (ER) have become increasingly popular for the Bangla language, the seventh most spoken language in the world. However, the language is structurally complex, which makes accurate emotion extraction difficult. Several distinct approaches, such as the extraction of positive and negative sentiments as well as multi-class emotions, have been explored in this field of study. Nevertheless, multi-label sentiment extraction, which involves identifying several feelings in a single piece of text, remains an almost untouched area in this language. Therefore, this study demonstrates a thorough method for constructing an annotated corpus from data scraped from Facebook to bridge this gap. To make the annotation more fruitful, a context-based approach has been used. Bidirectional Encoder Representations from Transformers (BERT), a well-known transformer-based model, achieved the best results of all methods implemented. Finally, a web application has been developed to demonstrate the performance of the fine-tuned top-performing model (BERT) for multi-label ER in Bangla.
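As an illustration of the multi-label ER setup described above, the sketch below loads a BERT checkpoint with a multi-label classification head and returns every emotion whose sigmoid score clears a threshold. The checkpoint name, emotion label set, and threshold are assumptions for illustration; the paper's fine-tuned model and label inventory may differ, and the classification head here is untrained until fine-tuned on the annotated corpus.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "love"]  # assumed label set
MODEL_NAME = "bert-base-multilingual-cased"                         # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(EMOTIONS),
    problem_type="multi_label_classification",  # uses BCEWithLogitsLoss during fine-tuning
)

def predict_emotions(text, threshold=0.5):
    """Return every emotion whose independent sigmoid score clears the threshold."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**enc).logits
    probs = torch.sigmoid(logits).squeeze(0)    # one independent probability per label
    return [e for e, p in zip(EMOTIONS, probs) if p.item() >= threshold]

# Output is meaningful only after fine-tuning on the annotated corpus.
print(predict_emotions("এটি একটি উদাহরণ বাক্য"))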