The field of e-learning has emerged as a topic of interest in academia due to the increased ease of accessing the Internet using smartphones and wireless devices. One of the challenges facing e-learning platforms is how to keep students motivated and engaged. Moreover, it is also crucial to identify the students that might need help in order to make sure their academic performance does not suffer. To that end, this paper investigates the relationship between student engagement and academic performance. The Apriori association rule mining algorithm is used to derive a set of rules that relate student engagement to academic performance. Analysis of the experimental results using the confidence and lift metrics shows that a positive correlation exists between students' engagement level and their academic performance in a blended e-learning environment. In particular, it is shown that higher engagement often leads to better academic performance. This corroborates previous work that linked engagement and academic performance in traditional classrooms.
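As an illustration of the rule-mining step, the sketch below derives engagement-performance rules with Apriori and filters them by confidence and lift. It assumes the mlxtend library, and the one-hot encoded records and attribute names are hypothetical placeholders for the actual engagement and grade data.

```python
# A minimal sketch of mining engagement-performance rules with Apriori,
# assuming the mlxtend library; features and records are illustrative only.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot encoded student records: each row is a student,
# each column a discretized attribute (engagement level, grade band).
records = pd.DataFrame(
    [
        {"engagement=high": 1, "engagement=low": 0, "grade=good": 1, "grade=poor": 0},
        {"engagement=high": 1, "engagement=low": 0, "grade=good": 1, "grade=poor": 0},
        {"engagement=high": 0, "engagement=low": 1, "grade=good": 0, "grade=poor": 1},
        {"engagement=high": 0, "engagement=low": 1, "grade=good": 1, "grade=poor": 0},
    ]
).astype(bool)

# Mine frequent itemsets, then keep rules with high confidence and lift > 1,
# i.e. rules where the antecedent raises the consequent's likelihood.
itemsets = apriori(records, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[rules["lift"] > 1.0][["antecedents", "consequents", "confidence", "lift"]])
```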
The Domain Name System (DNS) is a crucial component of current IP-based networks as it is the standard mechanism for name-to-IP resolution. However, due to its lack of data integrity and origin authentication processes, it is vulnerable to a variety of attacks. One such attack is typosquatting. Detecting this attack is particularly important as it can be a threat to corporate secrets and can be used to steal information or commit fraud. In this paper, a machine learning-based approach is proposed to tackle the typosquatting vulnerability. To that end, exploratory data analytics is first used to better understand the trends observed in eight domain name-based extracted features. Furthermore, a majority voting-based ensemble learning classifier built using five classification algorithms is proposed that can detect suspicious domains with high accuracy. Moreover, the observed trends are validated by studying the same features in an unlabeled dataset using the K-means clustering algorithm and by applying the developed ensemble learning classifier. Results show that legitimate domains have shorter names and fewer unique characters. Moreover, the developed ensemble learning classifier performs better in terms of accuracy, precision, and F-score. Furthermore, it is shown that similar trends are observed when clustering is used. However, the number of domains identified as potentially suspicious is high. Hence, the ensemble learning classifier is applied, with results showing that the number of domains identified as potentially suspicious is reduced by almost a factor of five while still maintaining the same trends in terms of feature statistics.
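A minimal sketch of the majority-voting idea follows. The five base learners, the two example features (name length and unique character count, two of the eight features mentioned above), and the toy domains and labels are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal sketch of a majority-voting ensemble over five classifiers;
# the domain-name features, sample domains, and labels are placeholders.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def extract_features(domain):
    """Two of the eight domain name-based features described above."""
    return [len(domain), len(set(domain))]

X = [extract_features(d) for d in ["example.com", "exampel.com", "gooogle.com", "github.com"]]
y = [0, 1, 1, 0]  # 0 = legitimate, 1 = suspicious (illustrative labels)

# Hard voting: each base learner casts one vote; the majority class wins.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression()),
        ("dt", DecisionTreeClassifier()),
        ("knn", KNeighborsClassifier(n_neighbors=3)),
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.predict([extract_features("paypa1.com")]))
```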
The increased reliance on the Internet and the corresponding surge in connectivity demand has led to a significant growth in Internet-of-Things (IoT) devices. The continued deployment of IoT devices has in turn led to an increase in network attacks due to the larger number of potential attack surfaces, as illustrated by recent reports that IoT malware attacks increased by 215.7% from 10.3 million in 2017 to 32.7 million in 2018. This illustrates the increased vulnerability and susceptibility of IoT devices and networks. Therefore, there is a need for effective and efficient attack detection and mitigation techniques in such environments. Machine learning (ML) has emerged as one potential solution due to the abundance of data generated by and available for IoT devices and networks. Hence, ML models have significant potential to be adopted for intrusion detection in IoT environments. To that end, this paper proposes an optimized ML-based framework consisting of a combination of the Bayesian Optimization with Gaussian Process (BO-GP) algorithm and a decision tree (DT) classification model to detect attacks on IoT devices in an effective and efficient manner. The performance of the proposed framework is evaluated using the Bot-IoT-2018 dataset. Experimental results show that the proposed optimized framework has high detection accuracy, precision, recall, and F-score, highlighting its effectiveness and robustness for the detection of botnet attacks in IoT environments.
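The sketch below illustrates the core of this combination: a Gaussian Process-based Bayesian optimizer tuning a decision tree's hyper-parameters, here using scikit-optimize's BayesSearchCV as one possible implementation. The search space, iteration budget, and synthetic data are illustrative stand-ins for the Bot-IoT features.

```python
# A minimal sketch of BO-GP tuning a decision tree with scikit-optimize;
# the search space and synthetic data stand in for the Bot-IoT features.
from skopt import BayesSearchCV
from skopt.space import Categorical, Integer
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

search = BayesSearchCV(
    DecisionTreeClassifier(random_state=0),
    {
        "max_depth": Integer(2, 20),
        "min_samples_split": Integer(2, 20),
        "criterion": Categorical(["gini", "entropy"]),
    },
    n_iter=25,  # BO iterations; a GP surrogate is used by default
    cv=5,
    scoring="f1",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```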
The Domain Name System (DNS) protocol plays a major role in today's Internet as it translates between website names and their corresponding IP addresses. However, due to the lack of processes for data integrity and origin authentication, the DNS protocol has several security vulnerabilities. This often leads to a variety of cyber-attacks, including botnet network attacks. One promising solution for detecting DNS-based botnet attacks is adopting machine learning (ML)-based solutions. To that end, this paper proposes a novel optimized ML-based framework to detect botnets based on their corresponding DNS queries. More specifically, the framework uses information gain as a feature selection method and a genetic algorithm (GA) as a hyper-parameter optimization model to tune the parameters of a random forest (RF) classifier. The proposed framework is evaluated using the state-of-the-art TI-2016 DNS dataset. Experimental results show that the proposed optimized framework reduced the feature set size by up to 60%. Moreover, it achieved higher detection accuracy, precision, recall, and F-score compared to the default classifier. This highlights the effectiveness and robustness of the proposed framework in detecting botnet attacks.
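The two-stage idea can be sketched as follows: mutual information (an information gain measure) prunes the feature set, and a deliberately simplified genetic algorithm evolves two RF hyper-parameters. The population size, mutation rate, gene ranges, and synthetic data are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch: information gain for feature selection, then a toy GA
# tuning a random forest; all GA settings here are simplified assumptions.
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)

# Stage 1: keep the top-k features by information gain (mutual information).
X_sel = SelectKBest(mutual_info_classif, k=8).fit_transform(X, y)

def fitness(genes):
    """Cross-validated accuracy of an RF with the encoded hyper-parameters."""
    n_estimators, max_depth = genes
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    return cross_val_score(clf, X_sel, y, cv=3).mean()

# Stage 2: tiny GA -- truncation selection, gene-swap crossover, mutation.
population = [(random.randint(10, 200), random.randint(2, 20)) for _ in range(8)]
for _ in range(5):  # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:4]
    children = []
    while len(children) < 4:
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])  # swap-style crossover of the two genes
        if random.random() < 0.3:  # mutation
            child = (random.randint(10, 200), child[1])
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best (n_estimators, max_depth):", best)
```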
Convolutional Neural Network (CNN) techniques have proven to be very useful in image-based anomaly detection applications. A CNN can be used as a deep feature extractor, with other anomaly detection techniques applied to the extracted features. In this scenario, using transfer learning is common since pretrained models provide deep feature representations that are useful for anomaly detection tasks. Consequently, anomalies can be detected by applying a similarity measure between the extracted features and a defined model of normality. A key factor in such approaches is the decision threshold used for detecting anomalies. While most of the proposed methods focus on the approach itself, little attention has been paid to setting the decision threshold. In this paper, we tackle this problem and propose a well-defined method to set the working-point decision threshold that improves detection accuracy. We introduce a transfer learning framework for anomaly detection based on a similarity measure with a Model of Normality (MoN) and show that with the proposed threshold settings, a significant performance improvement can be achieved. Moreover, the framework has low complexity with relaxed computational requirements.
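The similarity-to-normality pipeline can be sketched as below: a pretrained CNN (here ResNet-18 from torchvision, an illustrative choice) extracts deep features, the mean feature vector of normal samples serves as the MoN, and cosine similarity to the MoN is thresholded. The fixed threshold value is a placeholder; choosing it in a principled way is precisely what the proposed method addresses.

```python
# A minimal sketch of detection via similarity to a Model of Normality (MoN);
# the backbone choice and the fixed threshold are illustrative assumptions.
import torch
import torchvision.models as models

# Pretrained ResNet-18 with the classification head removed -> feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def features(batch):  # batch: (N, 3, 224, 224) image tensor
    return backbone(batch)

normal_images = torch.randn(16, 3, 224, 224)  # stand-in for normal samples
mon = features(normal_images).mean(dim=0)     # Model of Normality

def is_anomalous(image, threshold=0.5):       # threshold: placeholder value
    sim = torch.nn.functional.cosine_similarity(
        features(image.unsqueeze(0)), mon.unsqueeze(0)
    )
    return sim.item() < threshold             # low similarity to MoN -> anomaly

print(is_anomalous(torch.randn(3, 224, 224)))
```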
Surface anomaly detection plays an important quality control role in many manufacturing industries to reduce scrap production. Machine-based visual inspection has been utilized in recent years to conduct this task instead of relying on human experts. In particular, deep learning Convolutional Neural Networks (CNNs) have been at the forefront of these image processing-based solutions due to their predictive accuracy and efficiency. However, training a CNN on a classification objective requires a sufficiently large amount of defective data, which is often not available. In this paper, we address that challenge by instead training the CNN on surface texture patches with a distance-based anomaly detection objective. A deep residual-based triplet network model is utilized, and defective training samples are synthesized exclusively from non-defective samples via random erasing techniques to directly learn a similarity metric between same-class samples and out-of-class samples. Evaluation results demonstrate the approach's strength in detecting different types of anomalies, such as bent, broken, or cracked surfaces, both for known surfaces that are part of the training data and for unseen novel surfaces.
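A minimal sketch of this training step is given below: the anchor and positive come from the same non-defective texture, the negative is synthesized from a non-defective patch via random erasing, and a triplet margin loss learns the metric. The ResNet-18 backbone, margin, augmentation, and the random tensors standing in for texture patches are illustrative assumptions.

```python
# A minimal sketch of one triplet training step with synthesized defects;
# backbone, margin, and the stand-in "texture patches" are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()  # residual network as the embedding model

erase = T.RandomErasing(p=1.0, scale=(0.02, 0.2))  # synthesize a "defect"
triplet_loss = torch.nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# Stand-in batch of non-defective texture patches.
patches = torch.randn(8, 3, 224, 224)

anchor = backbone(patches)
positive = backbone(patches + 0.01 * torch.randn_like(patches))  # mild augmentation
negative = backbone(torch.stack([erase(p) for p in patches]))    # synthetic defects

loss = triplet_loss(anchor, positive, negative)  # pull same-class, push out-of-class
loss.backward()
optimizer.step()
print(float(loss))
```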
The growth in wireless broadband users, devices, and novel applications has led to a significant increase in the demand for new radio frequency spectrum. This demand is expected to grow even further given the projection that global annual IP traffic will reach 4.8 zettabytes by 2022. Moreover, it is projected that the number of Internet users will reach 4.8 billion and the number of connected devices will be close to 28.5 billion. However, because the spectrum is mostly allocated and divided, providing more spectrum to expand existing services or offer new ones has become more challenging. To address this, spectrum sharing has been proposed as a potential solution to improve spectrum utilization efficiency. However, adopting effective and efficient spectrum sharing mechanisms is in itself a challenging task given the multitude of levels and techniques that can be integrated to enable it. To that end, this paper provides an overview of the different spectrum sharing levels and techniques that have been proposed in the literature. Moreover, it discusses the potential of adopting dynamic sharing mechanisms by offering a Spectrum-as-a-Service architecture. Furthermore, it describes the potential role of machine learning models in facilitating the automated and efficient dynamic sharing of the spectrum and offering Spectrum-as-a-Service.
Cyber-security has garnered significant attention due to the increased dependency of individuals and organizations on the Internet and their concerns about the security and privacy of their online activities. Several previous machine learning (ML)-based network intrusion detection systems (NIDSs) have been developed to protect against malicious online behavior. This paper proposes a novel multi-stage optimized ML-based NIDS framework that reduces computational complexity while maintaining detection performance. The work studies the impact of oversampling techniques on the models' training sample size and determines the minimal suitable training sample size. Furthermore, it compares two feature selection techniques, information gain and correlation-based, and explores their effect on detection performance and time complexity. Moreover, different ML hyper-parameter optimization techniques are investigated to enhance the NIDS's performance. The performance of the proposed framework is evaluated using two recent intrusion detection datasets, the CICIDS 2017 and the UNSW-NB 2015 datasets. Experimental results show that the proposed model significantly reduces the required training sample size (by up to 74%) and feature set size (by up to 50%). Moreover, hyper-parameter optimization further enhances the model's detection performance, with detection accuracies over 99% for both datasets, outperforming recent works from the literature with 1-2% higher accuracy and a 1-2% lower false alarm rate.
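The staged structure can be sketched as below. SMOTE is assumed as the oversampling technique, mutual information stands in for the information gain selector, and randomized search represents one of the hyper-parameter optimization options; the classifier, search ranges, and synthetic imbalanced data are illustrative.

```python
# A minimal sketch of a multi-stage NIDS pipeline: oversampling (SMOTE
# assumed), information-gain feature selection, then hyper-parameter tuning.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1], random_state=0)

# Stage 1: oversample the minority (attack) class in the training data.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# Stage 2: information-gain-based feature selection (here: keep top 15 of 30).
X_fs = SelectKBest(mutual_info_classif, k=15).fit_transform(X_bal, y_bal)

# Stage 3: hyper-parameter optimization (randomized search as one option).
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [50, 100, 200], "max_depth": [5, 10, 20, None]},
    n_iter=8, cv=3, scoring="f1", random_state=0,
)
search.fit(X_fs, y_bal)
print(search.best_params_, round(search.best_score_, 3))
```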
Machine learning algorithms have been used widely in various applications and areas. To fit a machine learning model to different problems, its hyper-parameters must be tuned. Selecting the best hyper-parameter configuration for machine learning models has a direct impact on the model's performance. Doing so often requires deep knowledge of machine learning algorithms and of the appropriate hyper-parameter optimization techniques. Although several automatic optimization techniques exist, they have different strengths and drawbacks when applied to different types of problems. In this paper, optimizing the hyper-parameters of common machine learning models is studied. We introduce several state-of-the-art optimization techniques and discuss how to apply them to machine learning algorithms. We also survey many of the libraries and frameworks developed for hyper-parameter optimization problems and discuss some open challenges in hyper-parameter optimization research. Moreover, experiments are conducted on benchmark datasets to compare the performance of different optimization methods and to provide practical examples of hyper-parameter optimization. This survey will help industrial users, data analysts, and researchers to better develop machine learning models by identifying the proper hyper-parameter configurations effectively.
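In the spirit of those experiments, the sketch below compares three common optimizers, grid search, random search, and Bayesian optimization, on a benchmark dataset. The SVM model, the search spaces, and the use of scikit-learn and scikit-optimize are illustrative choices, not the paper's exact experimental setup.

```python
# A minimal sketch comparing grid search, random search, and Bayesian
# optimization on a benchmark dataset; model and spaces are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC
from skopt import BayesSearchCV
from skopt.space import Real

X, y = load_digits(return_X_y=True)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2]}, cv=3)
rand = RandomizedSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2]},
                          n_iter=4, cv=3, random_state=0)
bayes = BayesSearchCV(SVC(), {"C": Real(0.1, 10, prior="log-uniform"),
                              "gamma": Real(1e-4, 1e-1, prior="log-uniform")},
                      n_iter=15, cv=3, random_state=0)

for name, opt in [("grid", grid), ("random", rand), ("bayesian", bayes)]:
    opt.fit(X, y)
    print(name, opt.best_params_, round(opt.best_score_, 3))
```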
Network attacks have become increasingly prevalent, and their rate is growing tremendously. Both organizations and individuals are now concerned about the confidentiality, integrity, and availability of their critical information, which is often impacted by network attacks. To that end, several previous machine learning-based intrusion detection methods have been developed to secure network infrastructure from such attacks. In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian Optimization technique to tune the parameters of the Support Vector Machine with Gaussian Kernel (SVM-RBF), Random Forest (RF), and k-Nearest Neighbor (k-NN) algorithms. The performance of the considered algorithms is evaluated using the ISCX 2012 dataset. Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, and recall, as well as its low false alarm rate.
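The tuning step for one of the three models can be sketched as follows, using scikit-optimize's gp_minimize as one possible Bayesian optimization implementation; the same pattern applies to the RF and k-NN models. The search ranges, call budget, and synthetic data are illustrative, and the evaluation here uses plain cross-validated accuracy rather than the full metric set above.

```python
# A minimal sketch of Bayesian optimization tuning SVM-RBF hyper-parameters;
# search ranges, budget, and data are illustrative assumptions.
from skopt import gp_minimize
from skopt.space import Real
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=15, random_state=0)

def objective(params):
    C, gamma = params
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    # Negate accuracy, since gp_minimize minimizes the objective.
    return -cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

result = gp_minimize(
    objective,
    [Real(1e-2, 1e2, prior="log-uniform", name="C"),
     Real(1e-4, 1e0, prior="log-uniform", name="gamma")],
    n_calls=25, random_state=0,
)
print("best C, gamma:", result.x, "accuracy:", -result.fun)
```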