Abstract: This paper presents a novel wireless silent speech interface (SSI) that integrates multi-channel textile-based EMG electrodes into a headphone earmuff for real-time, hands-free communication. Unlike conventional patch-based EMG systems, which require large-area electrodes on the face or neck, our approach ensures comfort, discretion, and wearability while maintaining robust silent speech decoding. The system uses four graphene/PEDOT:PSS-coated textile electrodes to capture speech-related neuromuscular activity, with signals processed by a compact ESP32-S3-based wireless readout module. To address the challenge of variable skin-electrode coupling, we propose a 1D SE-ResNet architecture incorporating squeeze-and-excitation (SE) blocks that dynamically adjust per-channel attention weights, enhancing robustness against motion-induced impedance variations. The proposed system achieves 96% accuracy on 10 commonly used voice-free control words, outperforming conventional single-channel and non-adaptive baselines. Experimental validation, including XAI-based attention analysis and t-SNE feature visualization, confirms the model's adaptive channel selection and effective feature extraction. This work advances wearable EMG-based SSIs, demonstrating a scalable, low-power, and user-friendly platform for silent communication, assistive technologies, and human-computer interaction.
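The abstract above describes SE blocks that reweight the four electrode channels; the paper's exact layer sizes and weights are not given here, so the following is only a minimal plain-Python sketch of the SE gating step. The signal `x` and the weight matrices `w1`/`w2` are hypothetical stand-ins for learned parameters.

```python
import math

def se_reweight(x, w1, w2):
    """Squeeze-and-Excitation reweighting of a C-channel 1D signal.

    x  : list of C channels, each a list of T samples
    w1 : C x R weight matrix (squeeze descriptor -> bottleneck), hypothetical
    w2 : R x C weight matrix (bottleneck -> per-channel gates), hypothetical
    """
    C = len(x)
    # Squeeze: global average pooling over time, one descriptor per channel
    s = [sum(ch) / len(ch) for ch in x]
    # Excitation: bottleneck MLP with ReLU, then sigmoid gates in (0, 1)
    R = len(w1[0])
    h = [max(0.0, sum(s[i] * w1[i][r] for i in range(C))) for r in range(R)]
    g = [1.0 / (1.0 + math.exp(-sum(h[r] * w2[r][c] for r in range(R))))
         for c in range(C)]
    # Scale: apply each channel's attention weight to all of its samples
    return [[g[c] * v for v in x[c]] for c in range(C)]
```

Because the gates are recomputed from each input's own channel statistics, a channel whose coupling degrades (e.g. from motion-induced impedance change) can be down-weighted at inference time without retraining, which is the adaptivity the abstract attributes to the SE blocks.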
Abstract: Wearable biosensors have revolutionized human performance monitoring by enabling real-time assessment of physiological and biomechanical parameters. However, existing solutions cannot simultaneously capture breath-force coordination and muscle activation symmetry in a seamless, non-invasive manner, limiting their applicability in strength training and rehabilitation. This work presents a wearable smart sportswear system that integrates screen-printed graphene-based strain sensors with a wireless deep learning framework for real-time classification of exercise execution quality. Leveraging a 1D ResNet-18 for feature extraction, the system achieves 92.3% classification accuracy across six exercise conditions, distinguishing between breathing irregularities and asymmetric muscle exertion. Additionally, t-SNE analysis and Grad-CAM-based explainability visualization confirm that the network captures biomechanically relevant features, ensuring robust interpretability. The proposed system establishes a foundation for next-generation AI-powered sportswear, with applications in fitness optimization, injury prevention, and adaptive rehabilitation training.
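The Grad-CAM visualization mentioned above weights each convolutional feature map by the time-averaged gradient of the class score, then keeps only the positive evidence. The paper's implementation details are not in the abstract; this is a generic sketch of that computation for 1D signals, with `activations` and `gradients` standing in for values a framework's autograd would supply.

```python
def grad_cam_1d(activations, gradients):
    """1D Grad-CAM heatmap from a conv layer's K feature maps and the
    gradients of the class score w.r.t. those maps (each a list of T values).

    Channel weight alpha_k = time-averaged gradient of map k; the heatmap is
    ReLU applied to the alpha-weighted sum of the activation maps.
    """
    T = len(activations[0])
    alphas = [sum(g) / len(g) for g in gradients]
    return [max(0.0, sum(a * fmap[t] for a, fmap in zip(alphas, activations)))
            for t in range(T)]
```

Peaks in the resulting heatmap indicate which time segments of the strain signal most influenced the predicted class, which is how such a map can be checked against biomechanically relevant events like breaths or lifts.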
Abstract: The cardiac dipole has been shown to propagate to the ears, now a common site for consumer wearable electronics, enabling the recording of electrocardiogram (ECG) signals. However, in-ear ECG recordings often suffer from significant noise due to their small amplitude and the presence of other physiological signals, such as the electroencephalogram (EEG), which complicates the extraction of cardiovascular features. This study addresses the issue by developing a denoising convolutional autoencoder (DCAE) that enhances the ECG information in in-ear recordings, producing cleaner ECG outputs. The model is evaluated on a dataset of in-ear ECGs and corresponding clean Lead I ECGs from 45 healthy participants. The results demonstrate a substantial improvement in signal-to-noise ratio (SNR), with a median increase of 5.9 dB. The model also significantly improves heart rate estimation accuracy, reducing the mean absolute error by almost 70% and increasing R-peak detection precision to a median of 90%. We further trained and validated the model on a synthetic dataset generated from real ECG signals, including abnormal cardiac morphologies, corrupted by pink noise. The results show effective removal of noise sources and clinically plausible waveform reconstruction.
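The 5.9 dB figure above is an SNR improvement measured against the clean Lead I reference. The abstract does not spell out the exact formula, so the sketch below assumes the standard definition: reference signal power over residual-error power, in decibels. The improvement would then be the SNR of the DCAE output minus the SNR of the raw in-ear recording, both against the same reference.

```python
import math

def snr_db(reference, estimate):
    """SNR of an ECG estimate against a clean reference, in dB:
    reference power divided by residual-error power."""
    n = len(reference)
    p_sig = sum(r * r for r in reference) / n
    p_err = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / n
    return 10.0 * math.log10(p_sig / p_err)
```

This is an assumed metric for illustration; the paper's evaluation pipeline may align or filter the traces before computing it.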
Abstract: Fundus diseases are major causes of visual impairment and blindness worldwide, especially in underdeveloped regions, where the shortage of ophthalmologists hinders timely diagnosis. AI-assisted fundus image analysis offers high accuracy, reduced workload, and improved accessibility, but it requires a large amount of expert-annotated data to build reliable models. To address this dilemma, we propose a general self-supervised machine learning framework that can handle diverse fundus diseases from unlabeled fundus images. Our method's AUC surpasses that of existing supervised approaches by 15.7% and even exceeds the performance of a single human expert. Furthermore, our model adapts well to datasets from different regions and races, and to heterogeneous image sources and qualities from multiple cameras and devices. Our method offers a label-free general framework for diagnosing fundus diseases, which could benefit telehealth programs for early screening of people at risk of vision loss.
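The headline comparison above is stated in AUC, which has a simple rank interpretation worth keeping in mind when reading such results: it is the probability that a randomly chosen diseased image receives a higher score than a randomly chosen healthy one. A minimal sketch of that empirical estimate (the paper presumably uses a standard library routine):

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: probability that a random positive case scores
    higher than a random negative one, with ties counted as 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Because AUC depends only on score rankings, it is threshold-free, which makes it a common choice for screening settings like the one described.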
Abstract: Machine learning-based fundus image diagnosis has attracted worldwide interest owing to benefits such as reducing the demand on medical resources and providing objective evaluation results. However, current approaches are commonly supervised, imposing a heavy annotation workload on biomedical staff and hence hindering the expansion of effective databases. To address this issue, we established a label-free method, named 'SSVT', which automatically analyzes unlabeled fundus images and achieves a high evaluation accuracy of 97.0% on four main eye diseases, based on six public datasets and two datasets collected by Beijing Tongren Hospital. These promising results showcase the effectiveness of the proposed unsupervised learning method and its strong application potential for improving global eye health in regions with scarce biomedical resources.
Abstract: Our research presents a wearable Silent Speech Interface (SSI) technology that excels in device comfort, time-energy efficiency, and speech decoding accuracy for real-world use. We developed a biocompatible, durable textile choker with an embedded graphene-based strain sensor capable of accurately detecting subtle throat movements. This sensor, surpassing other strain sensors in sensitivity by 420%, simplifies signal processing compared with traditional voice recognition methods. Our system decodes speech signals with a computationally efficient neural network, specifically a one-dimensional convolutional neural network with residual structures. This network is energy- and time-efficient, reducing the computational load by 90% while achieving 95.25% accuracy on a 20-word lexicon and swiftly adapting to new users and words with minimal samples. This innovation demonstrates a practical, sensitive, and precise wearable SSI suitable for daily communication applications.
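Both this abstract and the sportswear abstract above rely on one-dimensional convolutional networks with residual structures. The papers' layer configurations are not given in the abstracts; the following is only a schematic plain-Python sketch of the core building block, a residual unit over a 1D signal, with hypothetical single-filter kernels `k1`/`k2` standing in for learned weights (real models use many filters plus normalization).

```python
def conv1d_same(x, kernel):
    """Zero-padded, stride-1 1D convolution ('same' output length)."""
    pad = len(kernel) // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(x))]

def residual_block(x, k1, k2):
    """One residual unit: conv -> ReLU -> conv, identity skip, final ReLU."""
    h = [max(0.0, v) for v in conv1d_same(x, k1)]
    h = conv1d_same(h, k2)
    return [max(0.0, a + b) for a, b in zip(h, x)]
```

The identity skip lets each block learn only a correction to its input, which is what keeps deeper 1D CNNs trainable and is consistent with the computational efficiency both abstracts emphasize.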