Abstract: Deep learning is significantly advancing the analysis of electroencephalography (EEG) data by effectively discovering highly nonlinear patterns within the signals. Data partitioning and cross-validation are crucial for assessing model performance and ensuring study comparability, since specific signal properties (e.g., their biometric nature) can produce varied results and data leakage depending on how the data are split. Such variability leads to incomparable studies and, increasingly, to overestimated performance claims, which are detrimental to the field. Nevertheless, no comprehensive guidelines for proper data partitioning and cross-validation exist in the domain, nor is there a quantitative evaluation of their impact on model accuracy, reliability, and generalizability. To assist researchers in identifying optimal experimental strategies, this paper thoroughly investigates the role of data partitioning and cross-validation in evaluating EEG deep learning models. Five cross-validation settings are compared across three supervised cross-subject classification tasks (BCI, Parkinson's, and Alzheimer's disease detection) and four established architectures of increasing complexity (ShallowConvNet, EEGNet, DeepConvNet, and Temporal-based ResNet). The comparison of over 100,000 trained models underscores, first, the importance of using subject-based cross-validation strategies for evaluating EEG deep learning models, except when within-subject analyses are acceptable (e.g., in BCI). Second, it highlights the greater reliability of nested approaches (N-LNSO) compared to non-nested counterparts, which are prone to data leakage and favor larger models that overfit the validation data. In conclusion, this work provides EEG deep learning researchers with an analysis of data partitioning and cross-validation and offers guidelines to avoid data leakage, which currently undermines the domain with potentially overestimated performance claims.
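To make the subject-based, nested splitting concrete, the following is a minimal Python sketch of a nested leave-N-subjects-out split. It uses scikit-learn's `GroupKFold` on synthetic data as a stand-in for the exact N-LNSO procedure evaluated in the paper; the array shapes, number of subjects, and fold counts are illustrative assumptions.

```python
# Minimal sketch of nested leave-N-subjects-out (N-LNSO) splitting, assuming
# one label and one subject ID per EEG window; all sizes are illustrative.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_windows, n_channels, n_times = 200, 8, 128
X = rng.standard_normal((n_windows, n_channels, n_times))  # EEG windows
y = rng.integers(0, 2, n_windows)                          # class labels
subjects = rng.integers(0, 10, n_windows)                  # subject ID per window

outer = GroupKFold(n_splits=5)  # outer loop: held-out test subjects
inner = GroupKFold(n_splits=4)  # inner loop: validation subjects for model selection

for train_val_idx, test_idx in outer.split(X, y, groups=subjects):
    # GroupKFold guarantees test subjects never appear in train/validation folds.
    for in_tr, in_va in inner.split(
        X[train_val_idx], y[train_val_idx], groups=subjects[train_val_idx]
    ):
        # Map inner indices back to positions in the original arrays.
        tr, va = train_val_idx[in_tr], train_val_idx[in_va]
        assert set(subjects[tr]).isdisjoint(subjects[va])
        assert set(subjects[va]).isdisjoint(subjects[test_idx])
        # ... fit the model on (X[tr], y[tr]), select hyperparameters on
        # (X[va], y[va]), and report performance on (X[test_idx], y[test_idx]).
```

The key property is that every subject appears in exactly one of the training, validation, or test partitions of a given split, which is what prevents biometric information from leaking across partitions and inflating scores.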
Abstract: The last decade has witnessed a notable surge in deep learning applications for the analysis of electroencephalography (EEG) data, thanks to its demonstrated superiority over conventional statistical techniques. However, even deep learning models can underperform if trained on poorly processed data. While preprocessing is essential to the analysis of EEG data, little research has examined its precise impact on model performance. This causes uncertainty about whether, and to what extent, EEG data should be preprocessed in a deep learning scenario. This study aims to investigate the role of EEG preprocessing in deep learning applications, drafting guidelines for future research. It evaluates the impact of different levels of preprocessing, from raw and minimally filtered data to complex pipelines with automated artifact removal algorithms. Six classification tasks (eye blinking, motor imagery, Parkinson's and Alzheimer's disease, sleep deprivation, and first episode psychosis) and four architectures commonly used in the EEG domain were considered for the evaluation. The analysis of 4,800 different training runs revealed statistical differences between the preprocessing pipelines at the intra-task level, for each of the investigated models, and at the inter-task level, for the largest one. Raw data generally leads to underperforming models, consistently ranking last in averaged score. In addition, models seem to benefit more from minimal pipelines without artifact handling methods, suggesting that EEG artifacts may contribute to the performance of deep neural networks.
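As a rough illustration of what these preprocessing levels can look like in practice, the sketch below builds three variants of the same recording with MNE-Python: raw, minimally filtered, and filtered with ICA-based artifact removal. The synthetic data, filter cutoffs, and ICA settings are illustrative assumptions, not the exact pipelines evaluated in the study.

```python
# Minimal sketch of three preprocessing levels on synthetic EEG data; the
# cutoffs (1-45 Hz) and ICA component count are assumptions for illustration.
import numpy as np
import mne

rng = np.random.default_rng(0)
sfreq, n_channels, n_seconds = 250.0, 16, 60
info = mne.create_info(
    [f"EEG{i:02d}" for i in range(n_channels)], sfreq, ch_types="eeg"
)
raw = mne.io.RawArray(
    rng.standard_normal((n_channels, int(sfreq * n_seconds))), info
)

# Level 0: raw data, no preprocessing.
raw_only = raw.copy()

# Level 1: minimal pipeline, band-pass filter only (no artifact handling).
minimal = raw.copy().filter(l_freq=1.0, h_freq=45.0, verbose=False)

# Level 2: heavier pipeline adding ICA-based artifact removal.
ica = mne.preprocessing.ICA(n_components=10, random_state=0)
ica.fit(minimal.copy())
# In a real pipeline, components flagged as artifacts (e.g., ocular) would be
# marked via ica.exclude before applying; nothing is excluded on synthetic data.
cleaned = ica.apply(minimal.copy())
```

Each variant would then be windowed and fed to the same model architecture, so that any performance difference can be attributed to the preprocessing level rather than to the network.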