Abstract: Brain-computer interfaces (BCIs) for electroencephalography (EEG)-based motor imagery classification offer promising solutions in neurorehabilitation and assistive technologies by enabling communication between the brain and external devices. However, the non-stationary nature of EEG signals and significant inter-subject variability pose substantial challenges for developing robust cross-subject classification models. This paper introduces a novel Spatial-Spectral-Temporal Attention Fusion (SSTAF) Transformer designed specifically for upper-limb motor imagery classification. Our architecture consists of a spectral transformer and a spatial transformer, followed by a transformer block and a classifier network. Each module is integrated with attention mechanisms that dynamically attend to the most discriminative patterns across multiple domains: spectral frequencies, spatial electrode locations, and temporal dynamics. The short-time Fourier transform is incorporated to extract features in the time-frequency domain, helping the model achieve better feature discrimination. We evaluated the SSTAF Transformer on two publicly available datasets, EEGMMIDB and BCI Competition IV-2a. It achieves accuracies of 76.83% and 68.30% on the two datasets, respectively, outperforming traditional CNN-based architectures and several existing transformer-based approaches.
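To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the stages the abstract names (an STFT front end, spectral attention over frequency bins, spatial attention over electrodes, a temporal transformer block, and a classifier). All module names, dimensions, and pooling choices here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SSTAFSketch(nn.Module):
    """Sketch of a spatial-spectral-temporal attention pipeline (assumed design)."""
    def __init__(self, n_classes=4, n_fft=64, hop=16, d_model=32):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.embed = nn.Linear(1, d_model)  # scalar STFT magnitude -> token embedding
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model, nhead=4, dim_feedforward=2 * d_model, batch_first=True)
        self.spectral, self.spatial, self.temporal = make_layer(), make_layer(), make_layer()
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, electrodes, samples) raw EEG
        b, c, _ = x.shape
        # STFT gives a time-frequency representation per electrode.
        win = torch.hann_window(self.n_fft, device=x.device)
        spec = torch.stft(x.reshape(b * c, -1), n_fft=self.n_fft, hop_length=self.hop,
                          window=win, return_complex=True).abs()
        f, t = spec.shape[-2], spec.shape[-1]
        spec = spec.reshape(b, c, f, t)
        # Spectral attention: tokens are frequency bins (per electrode/time frame).
        z = self.embed(spec.permute(0, 1, 3, 2).reshape(-1, f, 1))   # (b*c*t, f, d)
        z = self.spectral(z).mean(dim=1).reshape(b, c, t, -1)
        # Spatial attention: tokens are electrode locations (per time frame).
        z = z.permute(0, 2, 1, 3).reshape(b * t, c, -1)              # (b*t, c, d)
        z = self.spatial(z).mean(dim=1).reshape(b, t, -1)
        # Temporal transformer over STFT frames, then pool and classify.
        z = self.temporal(z).mean(dim=1)                             # (b, d)
        return self.classifier(z)

model = SSTAFSketch()
logits = model(torch.randn(2, 22, 512))  # 2 trials, 22 electrodes, 512 samples
print(logits.shape)                      # torch.Size([2, 4])
```

The sequential fusion order (spectral, then spatial, then temporal) and the mean-pooling between stages are design assumptions chosen to keep the sketch short; the paper's attention fusion may combine the domains differently.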
Abstract: Wounds such as foot ulcers, pressure ulcers, leg ulcers, and infected wounds pose substantial challenges for healthcare professionals, and prompt, accurate segmentation is crucial for effective treatment. However, contemporary methods lack a comprehensive model, especially a lightweight one, capable of both classification and segmentation. In this work, we address this gap by presenting a new U-Net-based architecture optimized for both wound classification and effective segmentation. We curated four extensive and diverse collections of wound images, drawing on the publicly available Medetec dataset and supplementing it with additional data sourced from the Internet. Our model performed exceptionally well, with an F1 score of 0.929, a Dice score of 0.931 in segmentation, and an accuracy of 0.915 in classification, demonstrating its effectiveness in both tasks. These results highlight the potential of our approach for automating wound care management.
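One plausible reading of a single U-Net handling both tasks is a shared encoder with a segmentation decoder and a classification head off the bottleneck. The sketch below, in PyTorch, illustrates that reading only; the abstract does not specify how classification is attached, so the dual-head wiring and the four-way class head (matching the four wound types named) are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    """Two 3x3 conv + BN + ReLU layers, the standard U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class DualHeadUNet(nn.Module):
    """Hypothetical shared-encoder U-Net with segmentation and classification heads."""
    def __init__(self, n_classes=4):  # 4 wound types assumed from the abstract
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        # Classification head taps the bottleneck features.
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(128, n_classes))
        # Decoder with skip connections for the segmentation mask.
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.seg_head = nn.Conv2d(32, 1, 1)  # binary wound mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        cls_logits = self.cls_head(b)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), cls_logits

model = DualHeadUNet()
mask_logits, cls_logits = model(torch.randn(1, 3, 128, 128))
print(mask_logits.shape, cls_logits.shape)  # (1, 1, 128, 128) and (1, 4)
```

Sharing one encoder between the two heads keeps the parameter count low, which fits the abstract's emphasis on a lightweight model serving both tasks.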