Abstract: Brain-computer interfaces (BCIs) face challenges from neural signal instability and from the memory constraints of real-time implantable applications. We introduce an online SNN decoder based on local three-factor learning rules with dual-timescale eligibility traces, which avoids backpropagation through time (BPTT) while maintaining competitive performance. Our approach combines error-modulated Hebbian updates, fast/slow trace consolidation, and adaptive learning-rate control, requiring only O(1) memory versus O(T) for BPTT methods. Evaluations on two primate datasets achieve comparable decoding accuracy (Pearson $R \geq 0.63$ on Zenodo, $R \geq 0.81$ on MC Maze) with 28-35% memory reduction and faster convergence than BPTT-trained SNNs. Closed-loop simulations with synthetic neural populations demonstrate adaptation to neural disruptions and learning from scratch without offline calibration. This work enables memory-efficient, continuously adaptive neural decoding suitable for resource-constrained implantable BCI systems.
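The mechanism the abstract describes can be illustrated with a minimal sketch: an error-modulated Hebbian update driven by fast and slow eligibility traces, whose memory footprint is independent of sequence length. All sizes, decay constants, and the linear spike-count readout below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 50, 2              # hypothetical population and output sizes
W = rng.normal(0.0, 0.1, (n_out, n_in))
e_fast = np.zeros_like(W)        # fast eligibility trace
e_slow = np.zeros_like(W)        # slow (consolidated) trace
lam_fast, lam_slow = 0.7, 0.98   # illustrative decay constants
eta = 1e-3                       # learning rate

target = np.array([0.5, -0.3])   # synthetic kinematic target
errs = []
for t in range(300):
    spikes = rng.poisson(0.2, n_in).astype(float)  # synthetic spike counts
    y = W @ spikes                                 # linear readout
    err = target - y                               # third factor: global error
    # Hebbian term: presynaptic activity broadcast to every output unit.
    hebb = np.broadcast_to(spikes, (n_out, n_in))
    e_fast = lam_fast * e_fast + hebb
    e_slow = lam_slow * e_slow + (1 - lam_slow) * e_fast  # consolidation
    # Local, error-modulated update: no stored activity history, O(1) memory.
    W = W + eta * err[:, None] * (e_fast + e_slow)
    errs.append(float(np.abs(err).sum()))
```

Because each step only touches the current traces and weights, the state held per synapse is constant regardless of how long the decoder runs, in contrast to BPTT, which must retain activity for the full truncation window.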
Abstract: Current approaches to prosthetic control rely on traditional decoding methods that lack real-time adaptability and intuitive responsiveness. These limitations are particularly pronounced in assistive technologies designed for individuals with diverse cognitive states and motor intentions. In this paper, we introduce a framework that leverages Reinforcement Learning (RL) with Deep Q-Networks (DQN) for classification tasks. We also present a preprocessing technique that applies the Common Spatial Pattern (CSP) to multiclass motor imagery (MI) classification in a One-Versus-The-Rest (OVR) manner. The resulting 'CSP space' transformation retains the temporal dimension of the EEG signals, which is crucial for extracting discriminative features. The integration of DQN with a 1D-CNN-LSTM architecture optimizes decision-making in real time, enhancing the system's adaptability to the user's evolving needs and intentions. We detail the data processing methods for two EEG motor imagery datasets. Our model, RLEEGNet, incorporates a 1D-CNN-LSTM architecture as the Online Q-Network within the DQN, facilitating continuous adaptation and optimization of control strategies through feedback; this mechanism allows the system to learn optimal actions through trial and error, progressively improving its performance. RLEEGNet classifies MI-EEG signals with high accuracy, reaching 100% on both the GigaScience (3-class) and BCI-IV-2a (4-class) datasets. These results highlight the potential of combining DQN with a 1D-CNN-LSTM architecture to significantly enhance the adaptability and responsiveness of BCI systems.
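The one-versus-the-rest CSP preprocessing the abstract describes can be sketched as follows: for each class, spatial filters are learned that maximize that class's variance against the pooled remaining classes, and projecting a trial through them keeps the temporal axis intact. The function name, array shapes, and synthetic data below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def csp_ovr(trials, labels, n_comp=2):
    """Hypothetical one-vs-rest CSP sketch.
    trials: (n_trials, n_channels, n_samples). Returns {class: W} with
    W of shape (2*n_comp, n_channels); W @ trial preserves the time axis."""
    def avg_cov(X):
        # Trace-normalized average spatial covariance over trials.
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

    filters = {}
    for c in np.unique(labels):
        C1 = avg_cov(trials[labels == c])        # target-class covariance
        C2 = avg_cov(trials[labels != c])        # pooled "rest" covariance
        d, U = np.linalg.eigh(C1 + C2)           # eigendecompose composite
        P = (U / np.sqrt(d)).T                   # whitening matrix
        _, B = np.linalg.eigh(P @ C1 @ P.T)      # diagonalize whitened target
        W = B.T @ P                              # full CSP projection
        # Extreme eigenvectors carry the most discriminative variance.
        filters[c] = np.vstack([W[:n_comp], W[-n_comp:]])
    return filters

# Synthetic demo: class 1 has inflated variance on channel 0.
rng = np.random.default_rng(1)
n_tr, n_ch, n_t = 40, 8, 128
X = rng.normal(size=(n_tr, n_ch, n_t))
y = np.repeat([0, 1], n_tr // 2)
X[y == 1, 0] *= 3.0
F = csp_ovr(X, y, n_comp=2)
Z = F[1] @ X[0]                                  # 'CSP space' trial, time kept
```

Because the projected trial `Z` is still a (filters x samples) time series rather than a log-variance feature vector, it can be fed directly to a temporal model such as the 1D-CNN-LSTM Q-network the abstract describes.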