Lama Alkhaled
Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead

Apr 07, 2024
Irene Pagliai, Goya van Boven, Tosin Adewumi, Lama Alkhaled, Namrata Gurung, Isabella Södergren, Elisa Barney

On the Limitations of Large Language Models (LLMs): False Attribution

Apr 06, 2024
Tosin Adewumi, Nudrat Habib, Lama Alkhaled, Elisa Barney

Vehicle Detection Performance in Nordic Region

Mar 22, 2024
Hamam Mokayed, Rajkumar Saini, Oluwatosin Adewumi, Lama Alkhaled, Bjorn Backe, Palaiahnakote Shivakumara, Olle Hagner, Yan Chai Hum

Instruction Makes a Difference

Feb 01, 2024
Tosin Adewumi, Nudrat Habib, Lama Alkhaled, Elisa Barney

ProCoT: Stimulating Critical Thinking and Writing of Students through Engagement with Large Language Models (LLMs)

Dec 15, 2023
Tosin Adewumi, Lama Alkhaled, Claudia Buck, Sergio Hernandez, Saga Brilioth, Mkpe Kekung, Yelvin Ragimov, Elisa Barney

AfriMTE and AfriCOMET: Empowering COMET to Embrace Under-resourced African Languages

Nov 16, 2023
Jiayi Wang, David Ifeoluwa Adelani, Sweta Agrawal, Ricardo Rei, Eleftheria Briakou, Marine Carpuat, Marek Masiak, Xuanli He, Sofia Bourhim, Andiswa Bukula, Muhidin Mohamed, Temitayo Olatoye, Hamam Mokayede, Christine Mwase, Wangui Kimotho, Foutse Yuehgoh, Anuoluwapo Aremu, Jessica Ojo, Shamsuddeen Hassan Muhammad, Salomey Osei, Abdul-Hakeem Omotayo, Chiamaka Chukwuneke, Perez Ogayo, Oumaima Hourrane, Salma El Anigri, Lolwethu Ndolela, Thabiso Mangwana, Shafie Abdi Mohamed, Ayinde Hassan, Oluwabusayo Olufunke Awoyomi, Lama Alkhaled, Sana Al-Azzawi, Naome A. Etori, Millicent Ochieng, Clemencia Siro, Samuel Njoroge, Eric Muchiri, Wangari Kimotho, Lyse Naomi Wamba Momo, Daud Abolade, Simbiat Ajao, Tosin Adewumi, Iyanuoluwa Shode, Ricky Macharm, Ruqayya Nasir Iro, Saheed S. Abdullahi, Stephen E. Moore, Bernard Opoku, Zainab Akinjobi, Abeeb Afolabi, Nnaemeka Obiefuna, Onyekachi Raphael Ogbu, Sam Brian, Verrah Akinyi Otiende, Chinedu Emmanuel Mbonu, Sakayo Toadoum Sari, Pontus Stenetorp

Robust and Fast Vehicle Detection using Augmented Confidence Map

Apr 27, 2023
Hamam Mokayed, Palaiahnakote Shivakumara, Lama Alkhaled, Rajkumar Saini, Muhammad Zeshan Afzal, Yan Chai Hum, Marcus Liwicki

Bipol: A Novel Multi-Axes Bias Evaluation Metric with Explainability for NLP

Apr 08, 2023
Lama Alkhaled, Tosin Adewumi, Sana Sabah Sabry

Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets

Jan 28, 2023
Tosin Adewumi, Isabella Södergren, Lama Alkhaled, Sana Sabah Sabry, Foteini Liwicki, Marcus Liwicki

ML_LTU at SemEval-2022 Task 4: T5 Towards Identifying Patronizing and Condescending Language

Apr 15, 2022
Tosin Adewumi, Lama Alkhaled, Hamam Alkhaled, Foteini Liwicki, Marcus Liwicki
