Text-based communication is a favoured method of communication, especially in business environments. As a result, it is often abused through malicious messages, e.g., spam emails, that deceive users into relaying personal information, including online account credentials or banking details. For this reason, many machine learning methods for text classification have been proposed and incorporated into the services of most email providers. However, optimising text classification algorithms and finding the right trade-off in their aggressiveness remains a major research problem. We present an updated survey of 12 machine learning text classifiers applied to a public spam corpus. A new pipeline is proposed to optimise hyperparameter selection and improve the models' performance by applying specific methods (based on natural language processing) in the preprocessing stage. Our study aims to provide a new methodology for investigating and optimising the effect of different feature sizes and hyperparameters in machine learning classifiers that are widely used in text classification problems. The classifiers are tested and evaluated on several metrics, including F-score, precision, recall, and run time. By analysing all these aspects, we show how the proposed pipeline can be used to achieve strong spam-filtering performance on the Enron dataset, a widely used public email corpus. Statistical tests and explainability techniques are applied to provide a robust analysis of the proposed pipeline, to interpret the classification outcomes of the 12 machine learning models, and to identify the words that drive the classification results. Our analysis shows that it is possible to identify an effective machine learning model to classify the Enron dataset with an F-score of 94%.
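To make the kind of pipeline the abstract describes concrete, the following is a minimal, hypothetical sketch (not the paper's actual pipeline or data): a simple tokenising preprocessing step, a multinomial Naive Bayes spam classifier with Laplace smoothing, and an F-score evaluation. All names, the toy corpus, and the smoothing parameter `alpha` are illustrative assumptions.

```python
import math
from collections import Counter

def preprocess(text):
    """Lowercase and whitespace-tokenise; a stand-in for the NLP preprocessing stage."""
    return text.lower().split()

def train_nb(docs, labels, alpha=1.0):
    """Fit a multinomial Naive Bayes classifier with Laplace smoothing (alpha)."""
    classes = sorted(set(labels))
    vocab = {w for d in docs for w in preprocess(d)}
    priors = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(preprocess(d))
    totals = {c: sum(counts[c].values()) for c in classes}

    def predict(text):
        best, best_lp = None, -math.inf
        for c in classes:
            # Log-posterior up to a constant: log prior + sum of smoothed log-likelihoods.
            lp = math.log(priors[c])
            for w in preprocess(text):
                lp += math.log((counts[c][w] + alpha) / (totals[c] + alpha * len(vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
    return predict

def f_score(y_true, y_pred, pos="spam"):
    """F1 score for the positive (spam) class, from precision and recall."""
    tp = sum(t == pos and p == pos for t, p in zip(y_true, y_pred))
    fp = sum(t != pos and p == pos for t, p in zip(y_true, y_pred))
    fn = sum(t == pos and p != pos for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Tiny illustrative corpus (invented; the paper uses the Enron email corpus).
docs = ["win free money now", "meeting agenda attached",
        "free prize claim now", "quarterly report attached"]
labels = ["spam", "ham", "spam", "ham"]
clf = train_nb(docs, labels)
preds = [clf(d) for d in docs]
print("F-score:", f_score(labels, preds))
```

In a full pipeline of the kind surveyed, `alpha` (and analogous hyperparameters of the other 11 classifiers) would be tuned via a search over held-out data, and the feature size would be varied in the preprocessing stage.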
In this paper, we evaluate the performance of multiple machine-learning methods in the emulation of agent-based models (ABMs). ABMs are a popular methodology for modelling complex systems composed of multiple interacting processes. The analysis of ABM outputs is often not straightforward, as the relationships between input parameters and outputs can be non-linear or even chaotic, and each individual model run can require significant CPU time. Statistical emulation, in which a statistical model of the ABM is constructed to allow for more in-depth model analysis, has proven valuable for some applications. Here we compare multiple machine-learning methods for ABM emulation in order to determine the approaches best suited to replicating the complex and non-linear behaviour of ABMs. Our results suggest that, in most scenarios, artificial neural networks (ANNs) and support vector machines outperform Gaussian process emulators, currently the most commonly used method for the emulation of complex computational models. ANNs produced the most accurate model replications in scenarios with high numbers of model runs, although training times for these emulators were considerably longer than for any other method. We propose that users of complex ABMs would benefit from using machine-learning methods for emulation, as this can facilitate more robust sensitivity analyses for their models and reduce CPU time when calibrating and analysing their simulations.
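The emulation workflow described above can be sketched with a deliberately toy example (hypothetical; not the paper's experiments or emulators): an "expensive" simulation is run on a small design of parameter points, and a cheap surrogate is fitted to those runs and queried instead of the model. Here the stand-in model is a logistic-style growth rule and the surrogate is a simple nearest-neighbour lookup; the paper compares far richer emulators (Gaussian processes, SVMs, ANNs) in the same train-on-runs, predict-elsewhere spirit.

```python
import random

def abm(growth, crowding, steps=200, pop=0.1):
    """Stand-in for an expensive ABM: iterate a logistic-style growth rule
    and return the final aggregate population level."""
    for _ in range(steps):
        pop = pop + growth * pop * (1.0 - crowding * pop)
    return pop

def fit_emulator(samples):
    """Nearest-neighbour surrogate: given training pairs (params, output),
    predict the output of the closest training run in parameter space."""
    def predict(params):
        nearest = min(samples,
                      key=lambda s: sum((p - q) ** 2 for p, q in zip(s[0], params)))
        return nearest[1]
    return predict

# Design stage: sample the parameter space and run the (slow) model once per point.
random.seed(0)
design = [(random.uniform(0.1, 0.5), random.uniform(0.5, 2.0)) for _ in range(200)]
samples = [(theta, abm(*theta)) for theta in design]
emu = fit_emulator(samples)

# Analysis stage: query the cheap surrogate instead of re-running the model.
theta = (0.3, 1.2)
print("model:", abm(*theta), "emulator:", emu(theta))
```

A sensitivity analysis or calibration loop would then evaluate `emu` thousands of times at negligible cost, which is the CPU-time saving the abstract refers to; the choice of surrogate (nearest neighbour here, GP/SVM/ANN in the paper) governs how faithfully the ABM's non-linear response is reproduced.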