Abstract: Simultaneous EEG-fMRI recording combines high temporal and high spatial resolution for tracking neural activity. Its usefulness, however, is severely limited by magnetic resonance (MR)-related artifacts, especially gradient artifacts (GA) and ballistocardiogram (BCG) artifacts, which contaminate the EEG signal. To address this issue, we propose DAR, a denoising-autoencoder-based deep learning framework for reducing MR-related artifacts in EEG recordings. Trained on paired artifact-contaminated and MR-corrected EEG from the CWL EEG-fMRI dataset, DAR uses a 1D convolutional autoencoder to learn a direct mapping from noisy to clean signal segments. DAR outperforms traditional artifact-removal methods such as principal component analysis (PCA), independent component analysis (ICA), average artifact subtraction (AAS), and wavelet thresholding, achieving a root-mean-squared error (RMSE) of 0.0218 $\pm$ 0.0152, a structural similarity index (SSIM) of 0.8885 $\pm$ 0.0913, and a signal-to-noise ratio (SNR) gain of 14.63 dB. Paired t-tests confirm that these improvements are statistically significant (p < 0.001; Cohen's d > 1.2). A leave-one-subject-out (LOSO) cross-validation protocol shows that the model generalizes well, yielding an average RMSE of 0.0635 $\pm$ 0.0110 and an SSIM of 0.6658 $\pm$ 0.0880 on unseen subjects. In addition, saliency-based visualizations show that DAR attends to artifact-dense regions, making its decisions easier to interpret. Together, these results position DAR as a promising and interpretable solution for real-time EEG artifact removal in simultaneous EEG-fMRI applications.
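The RMSE and SNR-gain figures quoted above can be computed directly from paired reference (MR-corrected) and denoised segments. A minimal NumPy sketch, using hypothetical synthetic signals rather than values from the paper (the function names and the simulated artifact are illustrative assumptions):

```python
import numpy as np

def rmse(reference, estimate):
    """Root-mean-squared error between a reference signal and an estimate."""
    return np.sqrt(np.mean((reference - estimate) ** 2))

def snr_db(reference, observed):
    """SNR in dB, treating the residual (observed - reference) as noise."""
    noise = observed - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Hypothetical 1-second EEG segment sampled at 250 Hz (illustrative only)
t = np.arange(250) / 250.0
clean = np.sin(2 * np.pi * 10 * t)                     # 10 Hz alpha-band component
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)       # simulated MR-related artifact
denoised = clean + 0.05 * np.sin(2 * np.pi * 50 * t)   # small residual after removal

# SNR gain = SNR after denoising minus SNR before denoising (in dB)
gain = snr_db(clean, denoised) - snr_db(clean, noisy)
```

With the residual amplitude reduced tenfold, the noise power drops by a factor of 100, so the sketch yields a 20 dB gain; the paper's 14.63 dB figure is computed the same way on real data.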
Abstract: Mental stress is a significant public health concern because of its detrimental effects on physical and mental well-being, motivating the development of continuous stress-monitoring tools for wearable devices. Blood volume pulse (BVP) sensors, readily available in many smartwatches, offer a convenient and cost-effective basis for stress monitoring. This study proposes a deep learning approach, the Transpose-Enhanced Autoencoder Network (TEANet), for stress detection from BVP signals. TEANet was trained and validated on a self-collected RUET SPML dataset of 19 healthy subjects and the publicly available Wearable Stress and Affect Detection (WESAD) dataset of 15 healthy subjects, achieving accuracies of 92.51% and 96.94%, F1 scores of 95.03% and 95.95%, and kappa values of 0.7915 and 0.9350 on the RUET SPML and WESAD datasets, respectively. TEANet detects mental stress from BVP signals with high accuracy and effectively handles class imbalance, underscoring its potential for reliable, continuous, real-time stress monitoring on wearable devices.
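The abstract reports F1 and Cohen's kappa alongside accuracy because both are more informative than raw accuracy when classes are imbalanced. A self-contained NumPy sketch of the two metrics, applied to hypothetical labels (stress = 1, non-stress = 0; the example arrays are illustrative, not dataset values):

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall for the positive class."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    p_o = np.mean(y_true == y_pred)                      # observed agreement
    p_pos = np.mean(y_true == 1) * np.mean(y_pred == 1)  # chance agreement on class 1
    p_neg = np.mean(y_true == 0) * np.mean(y_pred == 0)  # chance agreement on class 0
    p_e = p_pos + p_neg
    return (p_o - p_e) / (1 - p_e)

# Illustrative predictions on six hypothetical segments
y_true = np.array([1, 1, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1])
```

Here accuracy is 4/6 but kappa is only 1/3, showing how kappa discounts agreement expected by chance; the 0.9350 kappa reported on WESAD therefore indicates agreement well beyond chance.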