
Hasan Cavusoglu

Machine Generation and Detection of Arabic Manipulated and Fake News

Nov 05, 2020
El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed, Tariq Alhindi, Hasan Cavusoglu

Fake news and deceptive machine-generated text are serious problems threatening modern societies, including in the Arab world. This motivates work on detecting false and manipulated stories online. However, a bottleneck for this research is the lack of sufficient data to train detection models. We present a novel method for automatically generating Arabic manipulated (and potentially fake) news stories. Our method is simple and depends only on the availability of true stories, which are abundant online, and a part-of-speech (POS) tagger. To facilitate future work, we dispense with both of these requirements altogether by providing AraNews, a novel and large POS-tagged news dataset that can be used off the shelf. Using stories generated from AraNews, we carry out a human annotation study that sheds light on the effects of machine manipulation on text veracity. The study also measures human ability to detect Arabic machine-manipulated text generated by our method. Finally, we develop the first models for detecting manipulated Arabic news and achieve state-of-the-art results on Arabic fake news detection (macro F1 = 70.06). Our models and data are publicly available.
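The abstract notes that generation needs only true stories and a POS tagger. A minimal sketch of one plausible POS-guided manipulation is to swap two words that share a tag, which tends to preserve surface grammaticality while perturbing meaning. This is an illustration only, not the paper's actual AraNews pipeline: the `manipulate` helper, the tag names, and the toy English sentence (used here for readability; the paper targets Arabic) are all assumptions.

```python
import random

def manipulate(tagged_tokens, tag="NOUN", seed=0):
    """Swap two tokens that share the given POS tag.

    tagged_tokens: list of (word, pos_tag) pairs, as a POS tagger would emit.
    Returns the word sequence with one same-tag swap applied (or unchanged
    if fewer than two tokens carry the tag). Hypothetical sketch only.
    """
    rng = random.Random(seed)
    idx = [i for i, (_, t) in enumerate(tagged_tokens) if t == tag]
    words = [w for w, _ in tagged_tokens]
    if len(idx) < 2:
        return words
    i, j = rng.sample(idx, 2)
    words[i], words[j] = words[j], words[i]  # symmetric, so order of i, j is irrelevant
    return words
```

Because only same-tag words are exchanged, the output stays syntactically plausible while its factual content may no longer be true, which is the kind of manipulated story the detection models are trained on.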

* 10 pages, accepted in The Fifth Arabic Natural Language Processing Workshop (WANLP 2020) 

Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation

Jun 07, 2020
El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, Hasan Cavusoglu

We describe our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al., 2020). We view MT models at various training stages (i.e., checkpoints) as human learners at different levels. Hence, we employ an ensemble of multiple checkpoints of the same model to generate translation sequences with varying levels of fluency. From each checkpoint of our best model, we sample n-best sequences (n = 10) with a beam width of 100. We achieve 37.57 macro F1 with a six-checkpoint ensemble on the official English-to-Portuguese shared task test data, outperforming a baseline Amazon translation system (21.30 macro F1) and demonstrating the utility of our intuitive method.
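The multi-checkpoint idea above can be sketched as follows: each checkpoint contributes an n-best list of scored candidate translations, and the ensemble pools the lists into one ranking, here by summing each candidate's scores across checkpoints. The `ensemble_nbest` helper and the sum-of-scores aggregation are assumptions for illustration, not the authors' exact combination scheme.

```python
from collections import defaultdict

def ensemble_nbest(checkpoint_outputs, top_k=10):
    """Pool n-best lists from several checkpoints into one ranking.

    checkpoint_outputs: iterable of n-best lists, one per checkpoint,
    each a list of (candidate_translation, score) pairs.
    Candidates proposed by several checkpoints accumulate score, so
    translations that many "learner levels" agree on rise to the top.
    Hypothetical sketch only.
    """
    scores = defaultdict(float)
    for nbest in checkpoint_outputs:
        for candidate, score in nbest:
            scores[candidate] += score
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [candidate for candidate, _ in ranked[:top_k]]
```

Because earlier checkpoints favor simpler phrasings and later ones favor fluent phrasings, pooling their n-best lists yields candidate sets spanning several fluency levels, which suits STAPLE's goal of accepting many valid learner translations.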

* Accepted to the 4th Workshop on Neural Generation and Translation (Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education, Mayhew et al., 2020), co-located with ACL 2020 