CRNL, CRNL-COPHY
Abstract: Electroencephalography (EEG) signal cleaning has long been a critical challenge in the research community. The presence of artifacts can significantly degrade EEG data quality, complicating analysis and potentially leading to erroneous interpretations. While various artifact rejection methods have been proposed, the gold standard remains manual visual inspection by human experts, a process that is time-consuming, subjective, and impractical for large-scale EEG studies. Existing techniques are often hindered by a strong reliance on manual hyperparameter tuning, sensitivity to outliers, and high computational costs. In this paper, we introduce the improved Riemannian Potato Field (iRPF), a fast and fully automated method for EEG artifact rejection that addresses key limitations of current approaches. We evaluate iRPF against several state-of-the-art artifact rejection methods on two publicly available EEG databases labeled for various artifact types, comprising 226 EEG recordings. Our results demonstrate that iRPF outperforms all competitors across multiple metrics, with gains of up to 22% in recall, 102% in specificity, 54% in precision, and 24% in F1-score over Isolation Forest, Autoreject, Riemannian Potato, and Riemannian Potato Field, respectively. Statistical analysis confirmed the significance of these improvements (p < 0.001), with large effect sizes (Cohen's d > 0.8) in most comparisons. Additionally, on a typical EEG recording, iRPF performs artifact cleaning in under 8 milliseconds per epoch on a standard laptop, highlighting its efficiency for large-scale EEG data processing and real-time applications. iRPF offers a robust, data-driven artifact rejection solution for high-quality EEG pre-processing in brain-computer interfaces and clinical neuroimaging applications.
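For readers unfamiliar with the potato-based methods that iRPF builds on and is compared against, the sketch below shows how the classical Riemannian Potato baseline flags artifact epochs using pyriemann: each epoch's covariance matrix is compared, via its Riemannian distance, to a robust reference covariance, and epochs whose z-scored distance exceeds a threshold are rejected. This is a minimal illustration of the baseline, not the authors' iRPF; the synthetic `epochs` array, the 'oas' covariance estimator, and the z-score threshold of 3 are assumptions chosen for the example.

```python
# Minimal sketch of the classical Riemannian Potato baseline (not the authors' iRPF).
# Assumes `epochs` is an array of shape (n_epochs, n_channels, n_samples).
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.clustering import Potato

rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 8, 256))  # placeholder EEG epochs

# One SPD covariance matrix per epoch (shrinkage estimator chosen arbitrarily).
covs = Covariances(estimator='oas').fit_transform(epochs)

# Iteratively fit a reference covariance and reject epochs whose Riemannian
# distance to it exceeds the z-score threshold.
potato = Potato(metric='riemann', threshold=3)
potato.fit(covs)

labels = potato.predict(covs)      # 1 = clean, 0 = artifact (pyriemann defaults)
z_scores = potato.transform(covs)  # z-scored distances to the reference
clean_epochs = epochs[labels == 1]
```

A potato field extends this idea by combining several such potatoes, each tuned to a different artifact type or channel subset; the abstract's iRPF is an improved, fully automated variant of that scheme.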
Abstract: Neurofeedback training (NFT) aims to teach self-regulation of brain activity through real-time feedback, but it suffers from highly variable outcomes and poorly understood mechanisms, which hampers its validation. To address these issues, we propose a formal computational model of the NFT closed loop. Using Active Inference, a Bayesian framework modelling perception, action, and learning, we simulate agents interacting with an NFT environment. This enables us to test the impact of design choices (e.g., feedback quality, biomarker validity) and subject factors (e.g., prior beliefs) on training. Simulations show that training effectiveness is sensitive to feedback noise and bias, and to prior beliefs (highlighting the importance of guiding instructions), but also reveal that perfect feedback is insufficient to guarantee high performance. This approach provides a tool for assessing and predicting NFT variability, interpreting empirical data, and potentially developing personalized training protocols.
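The closed-loop logic can be illustrated with a deliberately simple stand-in. The toy NumPy sketch below is not the paper's Active Inference model and does not use an Active Inference library: it simulates a trainee that learns, from noisy binary feedback, which of two hypothetical mental strategies up-regulates a latent brain state. The strategy names, success probabilities, trial count, and the `feedback_accuracy` parameter are all illustrative assumptions standing in for feedback quality.

```python
# Toy NFT closed loop (plain NumPy): action -> latent brain state -> noisy
# feedback -> belief update. Not the authors' Active Inference model.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical probability that each strategy yields the "regulated" state.
P_REGULATED = {"strategy_A": 0.8, "strategy_B": 0.3}


def run_training(feedback_accuracy: float, n_trials: int = 200) -> float:
    """Return the fraction of trials on which the better strategy was chosen."""
    # Beta-Bernoulli belief about each strategy's success rate (uniform prior).
    alpha = {s: 1.0 for s in P_REGULATED}
    beta = {s: 1.0 for s in P_REGULATED}
    good_choices = 0

    for _ in range(n_trials):
        # Action: Thompson sampling over the current beliefs.
        chosen = max(alpha, key=lambda s: rng.beta(alpha[s], beta[s]))
        good_choices += chosen == "strategy_A"

        # Environment: latent state, then feedback that is only sometimes truthful.
        regulated = rng.random() < P_REGULATED[chosen]
        feedback = regulated if rng.random() < feedback_accuracy else not regulated

        # Perception/learning: update the belief with the (possibly wrong) feedback.
        if feedback:
            alpha[chosen] += 1.0
        else:
            beta[chosen] += 1.0

    return good_choices / n_trials


for acc in (1.0, 0.8, 0.6, 0.5):
    print(f"feedback accuracy {acc:.1f} -> good-strategy rate {run_training(acc):.2f}")
```

Pushing `feedback_accuracy` toward chance (0.5) makes the simulated trainee unable to identify the effective strategy, loosely mirroring the abstract's point that feedback noise limits training effectiveness; even with perfect feedback, regulation remains bounded by the best strategy's own success rate.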