A transhumeral prosthesis restores missing anatomical segments below the shoulder, including the hand. Active prostheses use real-valued, continuous sensor data to recognize patient target poses, or goals, and proactively move the artificial limb. Previous studies have examined how well data collected in stationary poses, without considering the time steps, can discriminate between goals. In this case study paper, we focus on using time series data from surface electromyography electrodes and kinematic sensors to sequentially recognize patients' goals. Our approach transforms the data into discrete events and trains an existing process mining-based goal recognition system. Results from data collected in a virtual reality setting with ten subjects demonstrate the effectiveness of the proposed goal recognition approach, which achieves significantly better precision and recall than state-of-the-art machine learning techniques and is less confident when wrong, a beneficial property for approximating smoother prosthesis movements.
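The abstract above does not specify how the continuous signals are discretized; a minimal sketch of one common choice, threshold-based symbolization of a sensor stream into a change-point event sequence (thresholds, labels, and the toy sEMG envelope are all hypothetical), is:

```python
import numpy as np

def discretize_to_events(signal, thresholds=(0.2, 0.6),
                         labels=("low", "mid", "high")):
    """Map a continuous sensor stream to a sequence of discrete events.
    Samples are binned by amplitude thresholds, and consecutive duplicates
    are collapsed so each event marks a change in the signal's level."""
    bins = np.digitize(signal, thresholds)     # 0 .. len(thresholds)
    events = [labels[bins[0]]]
    for b in bins[1:]:
        if labels[b] != events[-1]:
            events.append(labels[b])
    return events

# Toy sEMG envelope: rest followed by a ramp to full activation
sig = np.concatenate([np.full(50, 0.1), np.linspace(0.1, 1.0, 50)])
print(discretize_to_events(sig))  # ['low', 'mid', 'high']
```

Collapsing consecutive duplicates keeps the event log compact, which suits process-mining tools that consume traces of state changes rather than raw samples.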
Graph Convolutional Networks (GCNs) can capture non-Euclidean spatial dependence between different brain regions, and the graph pooling operator in GCNs is key to enhancing representation learning capability and acquiring abnormal brain maps. However, the majority of existing research designs graph pooling operators only from the perspective of nodes while disregarding the original edge features, which not only confines the application scenarios of graph pooling but also diminishes its ability to capture critical substructures. In this study, Edge-aware hard clustering graph pooling (EHCPool), the first clustering graph pooling method to support multidimensional edge features, is developed. EHCPool introduces the first 'Edge-to-node' score evaluation criterion based on edge features to assess node feature significance. To more effectively capture critical subgraphs, a novel Iteration n-top strategy is further designed to adaptively learn sparse hard clustering assignments for graphs. Subsequently, an innovative N-E Aggregation strategy is presented to aggregate node and edge feature information in each independent subgraph. The proposed model was evaluated on multi-site brain imaging public datasets and yielded state-of-the-art performance. We believe this method is the first deep learning tool with the potential to probe different types of abnormal functional brain networks from a data-driven perspective. Core code is at: https://github.com/swfen/EHCPool.
The available evidence suggests that dynamic functional connectivity (dFC) can capture time-varying abnormalities in brain activity in resting-state functional magnetic resonance imaging (rs-fMRI) data and has a natural advantage in uncovering mechanisms of abnormal brain activity in schizophrenia (SZ) patients. Hence, an advanced dynamic brain network analysis model called the temporal brain category graph convolutional network (Temporal-BCGCN) was employed. Firstly, a unique dynamic brain network analysis module, DSF-BrainNet, was designed to construct dynamic synchronization features. Subsequently, a novel graph convolution method, TemporalConv, was proposed, based on the synchronous temporal properties of the features. Finally, the first modular abnormal hemispherical lateralization test tool in deep learning based on rs-fMRI data, named CategoryPool, was proposed. This study was validated on the COBRE and UCLA datasets, achieving average accuracies of 83.62% and 89.71%, respectively, outperforming the baseline model and other state-of-the-art methods. The ablation results also demonstrate the advantages of TemporalConv over the traditional edge feature graph convolution approach and the improvement of CategoryPool over the classical graph pooling approach. Interestingly, this study showed that the lower-order perceptual system and higher-order network regions in the left hemisphere are more severely dysfunctional than those in the right hemisphere in SZ, and reaffirmed the importance of the left medial superior frontal gyrus in SZ. Our core code is available at: https://github.com/swfen/Temporal-BCGCN.
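For readers unfamiliar with dFC, the standard construction behind such dynamic brain network pipelines is sliding-window correlation; a minimal numpy sketch (window length, step, and the toy data dimensions are illustrative, not the paper's settings) is:

```python
import numpy as np

def dynamic_fc(ts, win=30, step=5):
    """Sliding-window dynamic functional connectivity.
    ts has shape (timepoints, regions); for each window a Pearson
    correlation matrix between regional time series is computed,
    giving a sequence of connectivity graphs over time."""
    mats = []
    for start in range(0, ts.shape[0] - win + 1, step):
        mats.append(np.corrcoef(ts[start:start + win].T))
    return np.stack(mats)  # (windows, regions, regions)

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))   # toy rs-fMRI: 120 TRs, 10 regions
fc = dynamic_fc(ts)
print(fc.shape)                       # (19, 10, 10)
```

Each slice of the output is one snapshot graph, which is the kind of time-resolved edge-feature input a temporal graph convolution can consume.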
An optimization method is proposed in this paper for the deployment of a given number of directional landmarks (location and pose) within a given region of the 3-D task space. This new deployment technique is built on the geometric models of both the landmarks and the monocular camera. In particular, a new concept of Multiple Coverage Probability (MCP) is defined to characterize the probability of at least n landmarks being covered simultaneously by a camera at a fixed position. The optimization is conducted with respect to the positions and poses of the given landmarks to maximize the MCP through global exploration of the given 3-D space. By adopting the elimination genetic algorithm, globally optimal solutions can be obtained, which are then applied to improve the convergence performance of a visual observer on SE(3) as a demonstration example. Both simulation and experimental results are presented to validate the effectiveness of the proposed landmark deployment optimization method.
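To make the MCP concept concrete, the following is a deliberately simplified planar Monte Carlo sketch, not the paper's 3-D geometric model: a camera at a fixed position with a field of view of +/- fov_half points in a uniformly random direction, and the estimate is the probability that at least n landmarks (given by bearing angles, all values hypothetical) are visible at once.

```python
import numpy as np

def mcp_estimate(landmark_angles, fov_half=np.pi / 4, n=2,
                 trials=20000, seed=0):
    """Monte Carlo estimate of a planar Multiple Coverage Probability:
    fraction of random camera headings under which at least n of the
    given landmark bearings fall inside the field of view."""
    rng = np.random.default_rng(seed)
    yaw = rng.uniform(-np.pi, np.pi, trials)          # camera headings
    angles = np.asarray(landmark_angles)
    diff = np.angle(np.exp(1j * (angles[None, :] - yaw[:, None])))
    covered = (np.abs(diff) <= fov_half).sum(axis=1)  # landmarks in view
    return (covered >= n).mean()

# Two landmarks 30 degrees apart are often co-visible in a 90-degree FOV;
# two landmarks 180 degrees apart never are.
print(mcp_estimate([0.0, np.pi / 6], n=2))
print(mcp_estimate([0.0, np.pi], n=2))   # 0.0
```

A deployment optimizer such as the elimination genetic algorithm would then search over the landmark positions and poses to maximize this quantity.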
Muscle fatigue is usually defined as a decrease in the ability to produce force. Surface electromyography (sEMG) signals have been widely used to provide information about muscle activities, including detecting muscle fatigue, via various data-driven techniques such as machine learning and statistical approaches. However, sEMG signals are weak (low-amplitude) signals with a low signal-to-noise ratio, and data-driven techniques cannot work well when data quality is poor. In particular, existing methods are unable to detect muscle fatigue arising in static poses. This work exploits the concept of weak monotonicity, which has been observed in the process of fatigue, to robustly detect muscle fatigue in the presence of measurement noise and human variation. Such a population-trend methodology has shown its potential in muscle fatigue detection, as demonstrated by an experiment with a static pose.
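One standard way to test for a weakly monotone trend in a noisy series, consistent with the population-trend idea above though not necessarily the authors' exact statistic, is the Mann-Kendall S statistic; a small sketch on a hypothetical median-frequency series (fatigue classically pulls sEMG median frequency downward) is:

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: concordant minus discordant pairs.
    Only the sign of each pairwise difference is used, so the test is
    robust to measurement noise; strongly negative S indicates a
    weakly monotone decreasing trend."""
    x = np.asarray(x)
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return int(s)

# Hypothetical median-frequency series with a fatigue-driven decline
# buried in measurement noise.
rng = np.random.default_rng(1)
mdf = 80 - 0.5 * np.arange(60) + rng.normal(0, 2, 60)
print(mann_kendall_s(mdf) < 0)   # True: decreasing population trend
```

Because only pairwise signs enter S, occasional noisy reversals do not mask the underlying decline, which is exactly the robustness a weak-monotonicity criterion is after.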
The steady-state visual evoked potential (SSVEP) is one of the most widely used modalities in brain-computer interfaces (BCIs) due to its many advantages. However, the existence of harmonics and the limited range of responsive frequencies in SSVEP make it challenging to further expand the number of targets without sacrificing other aspects of the interface or placing additional constraints on the system. This paper introduces a novel multi-frequency stimulation method for SSVEP and investigates its potential to effectively and efficiently increase the number of targets presented. The proposed stimulation method, obtained by superposing stimulation signals at different frequencies, is size-efficient, allows single-step target identification, places no strict constraints on the usable frequency range, is suitable for self-paced BCIs, and does not require specific light sources. In addition to the stimulus frequencies and their harmonics, the evoked SSVEP waveforms include frequencies that are integer linear combinations of the stimulus frequencies. Results of decoding SSVEPs collected from nine subjects using canonical correlation analysis (CCA), with only the stimulus frequencies and their harmonics as references, further demonstrate the potential of using such a stimulation paradigm in SSVEP-based BCIs.
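The CCA decoding step referenced above is standard in SSVEP BCIs: the candidate frequency whose sinusoidal reference set (fundamental plus harmonics) correlates best with the multichannel EEG wins. A minimal numpy sketch, with synthetic signal parameters that are illustrative rather than the study's, is:

```python
import numpy as np

def canon_corr(X, Y):
    """Largest canonical correlation between column spaces of X and Y,
    computed as the top singular value of Qx^T Qy after centering."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def make_ref(freq, t, n_harm=2):
    """Sin/cos reference set at freq and its harmonics."""
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harm + 1)
                            for f in (np.sin, np.cos)])

fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)
# Toy 2-channel EEG containing a 10 Hz SSVEP buried in noise
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + p)
                       for p in (0.0, 0.5)]) + rng.normal(0, 1, (len(t), 2))

scores = {f: canon_corr(eeg, make_ref(f, t)) for f in (10.0, 12.0, 15.0)}
print(max(scores, key=scores.get))   # 10.0
```

For the multi-frequency paradigm described above, the reference sets would additionally include the integer linear combinations of the superposed stimulus frequencies, though the reported results use only the frequencies and harmonics.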
In previous work, the authors proposed a data-driven optimisation algorithm for the personalisation of human-prosthetic interfaces, demonstrating the possibility of adapting prosthesis behaviour to its user while the user performs tasks with it. This method requires that the human and the prosthesis personalisation algorithm share the same pre-defined objective function, which was previously ensured by giving the human explicit feedback on what the objective function is. However, constantly displaying this information to the prosthesis user is impractical. Moreover, the method utilised task information in the objective function that may not be available from the wearable sensors typically used in prosthetic applications. In this work, the previous approach is extended to use a prosthesis objective function based on implicit human motor behaviour, which represents able-bodied human motor control and is measurable using wearable sensors. The approach is tested in a hardware implementation of the personalisation algorithm on a prosthetic elbow, where the prosthetic objective function is a function of upper-body compensation measured using wearable IMUs. Experimental results on able-bodied subjects using a supernumerary prosthetic elbow mounted on an elbow orthosis suggest that it is possible to use a prosthesis objective function that is implicit in human behaviour to achieve collaboration without providing explicit feedback to the human, motivating further studies.