
Yang Xia


A Spatial-channel-temporal-fused Attention for Spiking Neural Networks

Sep 22, 2022
Wuque Cai, Hongze Sun, Rui Liu, Yan Cui, Jun Wang, Yang Xia, Dezhong Yao, Daqing Guo

Figures 1–4

Spiking neural networks (SNNs) mimic brain computational strategies and exhibit substantial capabilities in spatiotemporal information processing. As an essential factor in human perception, visual attention refers to the dynamic process of selecting salient regions in biological vision systems. Although mechanisms of visual attention have achieved great success in computer vision, they are rarely introduced into SNNs. Inspired by experimental observations on predictive attentional remapping, we here propose a new spatial-channel-temporal-fused attention (SCTFA) module that can guide SNNs to efficiently capture underlying target regions by utilizing historically accumulated spatial-channel information. Through a systematic evaluation on three event stream datasets (DVS Gesture, SL-Animals-DVS and MNIST-DVS), we demonstrate that the SNN with the SCTFA module (SCTFA-SNN) not only significantly outperforms the baseline SNN (BL-SNN) and two other SNN models with degenerated attention modules, but also achieves accuracy competitive with existing state-of-the-art methods. Additionally, our detailed analysis shows that the proposed SCTFA-SNN model has strong robustness to noise and outstanding stability on incomplete data, while maintaining acceptable complexity and efficiency. Overall, these findings indicate that appropriately incorporating cognitive mechanisms of the brain may provide a promising approach to elevate the capability of SNNs.
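The core idea of the abstract — gating each time step's spatial-channel features with attention maps derived from historically accumulated information — can be illustrated with a minimal NumPy sketch. This is not the paper's SCTFA module; the function name, the sigmoid gating, and the exponential accumulation rule are illustrative assumptions.

```python
import numpy as np

def sctfa_like_attention(frames, decay=0.9):
    """Hedged sketch: modulate each frame of an event-stream clip with
    channel and spatial attention computed from a running (temporal)
    accumulation of past feature maps.

    frames: array of shape (T, C, H, W).
    """
    T, C, H, W = frames.shape
    history = np.zeros((C, H, W))  # historically accumulated spatial-channel info
    out = np.empty_like(frames)
    for t in range(T):
        # Channel attention: squeeze the spatial dims of the history, then gate.
        ch = 1.0 / (1.0 + np.exp(-history.mean(axis=(1, 2))))  # shape (C,)
        # Spatial attention: squeeze the channel dim of the history, then gate.
        sp = 1.0 / (1.0 + np.exp(-history.mean(axis=0)))       # shape (H, W)
        out[t] = frames[t] * ch[:, None, None] * sp[None, :, :]
        # Temporal fusion: exponentially accumulate past frames.
        history = decay * history + (1.0 - decay) * frames[t]
    return out
```

At t = 0 the history is empty, so both gates sit at 0.5 and the first frame is uniformly scaled; as history accumulates, persistently active channels and locations receive larger weights.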

* 12 pages, 8 figures, 5 tables; This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 

A Synapse-Threshold Synergistic Learning Approach for Spiking Neural Networks

Jun 10, 2022
Hongze Sun, Wuque Cai, Baoxin Yang, Yan Cui, Yang Xia, Dezhong Yao, Daqing Guo

Figures 1–4

Spiking neural networks (SNNs) have demonstrated excellent capabilities in various intelligent scenarios. Most existing methods for training SNNs are based on the concept of synaptic plasticity; however, learning in the real brain also utilizes intrinsic non-synaptic mechanisms of neurons. The spike threshold of biological neurons is a critical intrinsic neuronal feature that exhibits rich dynamics on a millisecond timescale and has been proposed as an underlying mechanism that facilitates neural information processing. In this study, we develop a novel synergistic learning approach that simultaneously trains synaptic weights and spike thresholds in SNNs. SNNs trained with synapse-threshold synergistic learning (STL-SNNs) achieve significantly higher accuracies on various static and neuromorphic datasets than SNNs trained with either of the two single-learning models, synaptic learning (SL) or threshold learning (TL). During training, the synergistic approach optimizes neural thresholds, providing the network with stable signal transmission via appropriate firing rates. Further analysis indicates that STL-SNNs are robust to noisy data and exhibit low energy consumption for deep network structures. Additionally, the performance of STL-SNNs can be further improved by introducing a generalized joint decision framework (JDF). Overall, our findings indicate that biologically plausible synergies between synaptic and intrinsic non-synaptic mechanisms may provide a promising approach for developing highly efficient SNN learning methods.
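The role the spike threshold plays alongside the synaptic weight can be seen in a single leaky integrate-and-fire (LIF) neuron unrolled over time. The sketch below is not the paper's training procedure; it only shows why both `w` and `theta` are natural trainable parameters — each independently shifts when and how often the neuron fires. The decay constant and hard reset are illustrative assumptions.

```python
def lif_forward(inputs, w, theta, decay=0.5):
    """Minimal LIF neuron: integrate weighted input into the membrane
    potential v, emit a spike when v crosses the threshold theta, then
    hard-reset. In synapse-threshold synergistic learning, both the
    synaptic weight w and the threshold theta would receive updates
    (via surrogate gradients in practice).
    """
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + w * x            # leaky membrane integration
        s = 1.0 if v >= theta else 0.0   # threshold crossing
        spikes.append(s)
        v = v * (1.0 - s)                # hard reset after a spike
    return spikes
```

With a constant input train, lowering `theta` (or raising `w`) increases the firing rate, which is exactly the knob the synergistic approach tunes to keep signal transmission stable.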

* 13 pages, 9 figures, submitted to the IEEE Transactions on Neural Networks and Learning Systems (TNNLS) 

APAN: Asynchronous Propagation Attention Network for Real-time Temporal Graph Embedding

Dec 16, 2020
Xuhong Wang, Ding Lyu, Mengjian Li, Yang Xia, Qi Yang, Xinwen Wang, Xinguang Wang, Ping Cui, Yupu Yang, Bowen Sun, Zhenyu Guo, Junkui Li

Figures 1–4

Limited by the time complexity of querying k-hop neighbors in a graph database, most graph algorithms cannot be deployed online to execute millisecond-level inference. This problem dramatically limits the potential of applying graph algorithms in certain areas, such as financial fraud detection. Therefore, we propose the Asynchronous Propagation Attention Network, an asynchronous continuous-time dynamic graph algorithm for real-time temporal graph embedding. Traditional graph models usually execute two serial operations: first graph computation and then model inference. We decouple the model inference and graph computation steps so that heavy graph query operations do not degrade the speed of model inference. Extensive experiments demonstrate that the proposed method achieves competitive performance while improving inference speed by a factor of 8.7.
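The decoupling described above — serving predictions from locally cached state while deferring neighbor propagation — can be sketched with a per-node mailbox. This is only an illustration of the pattern; the class name, mailbox size, and mean-pooling readout are hypothetical, not the paper's architecture.

```python
from collections import defaultdict, deque

class AsyncMailboxGraph:
    """Sketch of inference/propagation decoupling for a dynamic graph.

    Fast path (infer): read only the node's fixed-size mailbox — no
    k-hop graph query — so inference cost is O(mailbox size).
    Slow path (propagate): push a message to neighbors' mailboxes;
    this can run asynchronously after the prediction is served.
    """

    def __init__(self, mailbox_size=3):
        self.mail = defaultdict(lambda: deque(maxlen=mailbox_size))

    def infer(self, node):
        # Millisecond path: local read, mean-pool the cached messages.
        box = self.mail[node]
        return sum(box) / len(box) if box else 0.0

    def propagate(self, neighbors, message):
        # Heavy path: deliver the message to each neighbor's mailbox.
        for n in neighbors:
            self.mail[n].append(message)
```

Because `infer` never touches the graph topology, its latency is independent of node degree; the bounded `deque` keeps per-node memory constant as messages keep arriving.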

* 10 pages. Submitted to SIGMOD 2021, Under Review 

APAN: Asynchronous Propagate Attention Network for Real-time Temporal Graph Embedding

Nov 27, 2020
Xuhong Wang, Ding Lyu, Mengjian Li, Yang Xia, Qi Yang, Xinwen Wang, Xinguang Wang, Ping Cui, Yupu Yang, Bowen Sun, Zhenyu Guo

Figures 1–4

Limited by the time complexity of querying k-hop neighbors in a graph database, most graph algorithms cannot be deployed online to execute millisecond-level inference. This problem dramatically limits the potential of applying graph algorithms in certain areas, such as financial fraud detection. Therefore, we propose the Asynchronous Propagate Attention Network, an asynchronous continuous-time dynamic graph algorithm for real-time temporal graph embedding. Traditional graph models usually execute two serial operations: first graph computation and then model inference. We decouple the model inference and graph computation steps so that heavy graph query operations do not degrade the speed of model inference. Extensive experiments demonstrate that the proposed method achieves competitive performance while improving inference speed by a factor of 8.7.

* 10 pages. Submitted to SIGMOD 2021, Under Review 