Nedjma Ousidhoum

SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)

May 01, 2023
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Seid Muhie Yimam, David Ifeoluwa Adelani, Ibrahim Sa'id Ahmad, Nedjma Ousidhoum, Abinew Ayele, Saif M. Mohammad, Meriem Beloucif, Sebastian Ruder

We present the first Afrocentric SemEval shared task, Sentiment Analysis for African Languages (AfriSenti-SemEval). AfriSenti-SemEval is a sentiment classification challenge in 14 African languages: Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá (Muhammad et al., 2023), using data labeled with three sentiment classes. The dataset is available at https://github.com/afrisenti-semeval/afrisent-semeval-2023. We present three subtasks: (1) Task A: monolingual classification, which received 44 submissions; (2) Task B: multilingual classification, which received 32 submissions; and (3) Task C: zero-shot classification, which received 34 submissions. The best performance for Tasks A and B was achieved by the NLNDE team, with 71.31 and 75.06 weighted F1, respectively. UCAS-IIE-NLP achieved the best average score for Task C, with 58.15 weighted F1. We describe the approaches adopted by the top 10 systems.

* 19 pages, 5 figures, 6 tables 
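
All three subtasks above are ranked by weighted F1. Below is a minimal sketch of how that metric can be computed with scikit-learn; the three class names and the toy labels are illustrative, not drawn from the task data.

# Sketch: weighted F1, the ranking metric for the AfriSenti-SemEval subtasks.
# The gold/predicted labels below are made up for illustration only.
from sklearn.metrics import f1_score

gold = ["positive", "negative", "neutral", "negative", "positive"]
pred = ["positive", "neutral", "neutral", "negative", "positive"]

# average="weighted" computes per-class F1 and weights each class by its
# number of gold instances, so more frequent classes contribute more.
print(f1_score(gold, pred, average="weighted"))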

The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who

Apr 27, 2023
Michael Schlichtkrull, Nedjma Ousidhoum, Andreas Vlachos

Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers and annotating epistemic elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the use of fact-checking artefacts.

AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages

Feb 17, 2023
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Abinew Ali Ayele, Nedjma Ousidhoum, David Ifeoluwa Adelani, Seid Muhie Yimam, Ibrahim Sa'id Ahmad, Meriem Beloucif, Saif Mohammad, Sebastian Ruder, Oumaima Hourrane, Pavel Brazdil, Felermino Dário Mário António Ali, Davis Davis, Salomey Osei, Bello Shehu Bello, Falalu Ibrahim, Tajuddeen Gwadabe, Samuel Rutunda, Tadesse Belay, Wendimu Baye Messelle, Hailu Beshada Balcha, Sisay Adugna Chala, Hagos Tesfahun Gebremichael, Bernard Opoku, Steven Arthur

Africa is home to over 2,000 languages from more than six language families and has the highest linguistic diversity among all continents. This includes 75 languages with at least one million speakers each. Yet, there is little NLP research conducted on African languages. Crucial in enabling such research is the availability of high-quality annotated datasets. In this paper, we introduce AfriSenti, which consists of 14 sentiment datasets of 110,000+ tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá) from four language families, annotated by native speakers. The data is used in SemEval 2023 Task 12, the first Afro-centric SemEval shared task. We describe the data collection methodology, the annotation process, and the related challenges when curating each of the datasets. We conduct experiments with different sentiment classification baselines and discuss their usefulness. We hope AfriSenti enables new work on under-represented languages. The dataset is available at https://github.com/afrisenti-semeval/afrisent-semeval-2023 and can also be loaded as a Hugging Face dataset (https://huggingface.co/datasets/shmuhammad/AfriSenti).

* 15 pages, 6 figures, 9 tables 
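
Since the abstract notes the corpus can be loaded through the Hugging Face Hub, here is a minimal loading sketch, assuming each language is exposed as a named configuration; the configuration name "amh" (Amharic) and the split name are assumptions to verify against the dataset card at https://huggingface.co/datasets/shmuhammad/AfriSenti.

# Sketch: loading one AfriSenti language from the Hugging Face Hub.
# "amh" (Amharic) is an assumed configuration name; check the dataset card.
from datasets import load_dataset

afrisenti = load_dataset("shmuhammad/AfriSenti", "amh")
print(afrisenti)              # available splits and their sizes
print(afrisenti["train"][0])  # one tweet with its sentiment label (assumed split name)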

Multilingual and Multi-Aspect Hate Speech Analysis

Aug 29, 2019
Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, Dit-Yan Yeung

Current research on hate speech analysis is typically oriented towards monolingual and single classification tasks. In this paper, we present a new multilingual multi-aspect hate speech analysis dataset and use it to test the current state-of-the-art multilingual multitask learning approaches. We evaluate our dataset in various classification settings, then we discuss how to leverage our annotations in order to improve hate speech detection and classification in general.
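As a rough illustration of the multitask setting described above (not the paper's exact architecture), a shared encoder can feed one classification head per annotated aspect; the aspect names and label counts below are assumptions made for the sketch.

# Sketch: shared representation with one classifier head per annotation aspect.
# Aspect names and label counts are illustrative, not the dataset's schema.
import torch
import torch.nn as nn

class MultiAspectHeads(nn.Module):
    def __init__(self, encoder_dim, aspects):
        super().__init__()
        # One linear head per aspect on top of a shared pooled encoding.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(encoder_dim, n_labels) for name, n_labels in aspects.items()}
        )

    def forward(self, pooled):
        # One logit vector per aspect; per-aspect losses are summed during training.
        return {name: head(pooled) for name, head in self.heads.items()}

model = MultiAspectHeads(encoder_dim=768, aspects={"hostility": 6, "target_attribute": 8})
logits = model(torch.randn(4, 768))  # a batch of 4 pooled encoder outputs
print({name: tuple(t.shape) for name, t in logits.items()})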
