
Youru Li


Node-oriented Spectral Filtering for Graph Neural Networks

Dec 07, 2022
Shuai Zheng, Zhenfeng Zhu, Zhizhe Liu, Youru Li, Yao Zhao

Figures 1–4 for Node-oriented Spectral Filtering for Graph Neural Networks

Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data but are far less impressive on non-homophilic graph data, owing to the inherent low-pass filtering property of GNNs. In general, since real-world graphs are often a complex mixture of diverse subgraph patterns, learning a universal spectral filter on the graph from a global perspective, as most current works do, may struggle to adapt to variations in local patterns. On the basis of a theoretical analysis of local patterns, we rethink existing spectral filtering methods and propose \textbf{\underline{N}}ode-oriented spectral \textbf{\underline{F}}iltering for \textbf{\underline{G}}raph \textbf{\underline{N}}eural \textbf{\underline{N}}etworks (NFGNN). By estimating a node-oriented spectral filter for each node, NFGNN gains the capability of precise local node positioning via the generalized translation operator, and can thus adaptively discriminate variations in local homophily patterns. Meanwhile, re-parameterization yields a good trade-off between global consistency and local sensitivity when learning the node-oriented spectral filters. Furthermore, we theoretically analyze the localization property of NFGNN, showing that the signal after adaptive filtering remains positioned around the corresponding node. Extensive experimental results demonstrate that the proposed NFGNN achieves more favorable performance.
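The core idea — a polynomial spectral filter whose coefficients vary per node, re-parameterized as a shared global vector plus node-specific offsets — can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function names, the polynomial basis (plain powers of the Laplacian), and the offset parameterization are all assumptions.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def node_oriented_filter(A, x, theta_global, theta_local):
    """Per-node polynomial filtering: y_i = sum_k theta_{i,k} * (L^k x)_i.

    theta_global: (K+1,)   coefficients shared by all nodes (global consistency)
    theta_local:  (n, K+1) node-specific offsets (local sensitivity);
                  the sum is the re-parameterized per-node filter.
    """
    L = normalized_laplacian(A)
    n, K = len(A), len(theta_global) - 1
    theta = theta_global[None, :] + theta_local       # (n, K+1) per-node coeffs
    # Columns of `powers` hold L^0 x, L^1 x, ..., L^K x
    powers = np.empty((n, K + 1))
    p = x.copy()
    for k in range(K + 1):
        powers[:, k] = p
        p = L @ p
    return (theta * powers).sum(axis=1)               # each node uses its own filter

# Toy usage: 4-node path graph; zero offsets reduce to one shared filter
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([1.0, 0.8, 0.3, 0.1])
y = node_oriented_filter(A, x, np.array([1.0, -0.5, 0.1]), np.zeros((4, 3)))
```

With nonzero `theta_local`, nodes in locally homophilic regions can keep low-pass coefficients while nodes in heterophilic regions shift toward high-pass responses.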


HGV4Risk: Hierarchical Global View-guided Sequence Representation Learning for Risk Prediction

Nov 15, 2022
Youru Li, Zhenfeng Zhu, Xiaobo Guo, Shaoshuai Li, Yuchen Yang, Yao Zhao

Figures 1–4 for HGV4Risk: Hierarchical Global View-guided Sequence Representation Learning for Risk Prediction

Risk prediction, a typical time-series modeling problem, is usually tackled by learning trends in markers or historical behavior from sequence data, and has been widely applied in healthcare and finance. In recent years, deep learning models, especially long short-term memory neural networks (LSTMs), have achieved superior performance on such sequence representation learning tasks. Although some attention- or self-attention-based models with time-aware or feature-aware enhancement strategies outperform other temporal modeling methods, the improvement is limited by a lack of guidance from a global view. To address this issue, we propose a novel end-to-end Hierarchical Global View-guided (HGV) sequence representation learning framework. Specifically, the Global Graph Embedding (GGE) module is proposed to learn sequential clip-aware representations from a temporal correlation graph at the instance level. Furthermore, following the key-query attention paradigm, a harmonic $\beta$-attention ($\beta$-Attn) is developed to adaptively make a global trade-off between time-aware decay and observation significance at the channel level. Moreover, the hierarchical representations at both the instance level and the channel level are coordinated by heterogeneous information aggregation under the guidance of the global view. Experimental results on a benchmark dataset for healthcare risk prediction, and on a real-world industrial scenario for Small and Mid-size Enterprise (SME) credit-overdue risk prediction at MYBank, Ant Group, show that the proposed model achieves competitive prediction performance compared with known baselines.
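The trade-off the abstract describes — blending content-based attention scores with a time-aware decay prior through a learnable gate — can be sketched as below. This is a hypothetical reading of the $\beta$-attention idea, not the paper's implementation: the sigmoid gating, the `softmax(-time_gaps)` decay form, and all function names are assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def beta_attention(Q, K, V, time_gaps, beta):
    """Blend key-query content scores with a time-decay prior via a gate.

    time_gaps: (1, T) elapsed time since each observation (larger = older)
    beta:      learnable scalar; sigmoid(beta) sets the content/decay balance
    """
    d = Q.shape[-1]
    content = softmax(Q @ K.T / np.sqrt(d))     # observation significance
    decay = softmax(-time_gaps)                 # recent observations weigh more
    g = 1.0 / (1.0 + np.exp(-beta))             # gate in (0, 1)
    attn = g * content + (1.0 - g) * decay      # convex blend: still sums to 1
    return attn @ V, attn

rng = np.random.default_rng(0)
Q = rng.normal(size=(1, 8))
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 3))
gaps = np.array([[4.0, 3.0, 2.0, 1.0, 0.0]])    # e.g. days since each event
out, attn = beta_attention(Q, K, V, gaps, beta=0.0)
```

Because both terms are softmax distributions, their convex combination remains a valid attention distribution, so `beta` alone controls how strongly recency overrides content.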

* 12 pages, 10 figures 

EA-LSTM: Evolutionary Attention-based LSTM for Time Series Prediction

Nov 09, 2018
Youru Li, Zhenfeng Zhu, Deqiang Kong, Hua Han, Yao Zhao

Figures 1–4 for EA-LSTM: Evolutionary Attention-based LSTM for Time Series Prediction

Time series prediction with deep learning methods, especially long short-term memory neural networks (LSTMs), has achieved notable success in recent years. Although LSTMs can capture long-term dependencies, their ability to pay different degrees of attention to sub-window features within multiple time steps is insufficient. To address this issue, we propose an evolutionary attention-based LSTM trained with competitive random search for multivariate time series prediction. By transferring shared parameters, an evolutionary attention learning approach is introduced into the LSTM model; thus, analogous to biological evolution, the pattern of importance-based attention sampling can be confirmed during temporal relationship mining. To avoid being trapped in local optima, as traditional gradient-based methods often are, an evolutionary-computation-inspired competitive random search method is proposed to configure the parameters of the attention layer. Experimental results illustrate that the proposed model achieves competitive prediction performance compared with other baseline methods.
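The gradient-free search the abstract describes can be sketched as a competitive random search: candidate attention-parameter vectors compete on a fitness score, winners survive and spawn perturbed offspring. This is an illustrative sketch only; the population size, mutation scale, and toy fitness function are assumptions, not the paper's setup.

```python
import numpy as np

def competitive_random_search(fitness, dim, pop=20, iters=50, sigma=0.1, seed=0):
    """Maximize `fitness` over vectors of length `dim` without gradients.

    Each round, the top half of the population survives and each survivor
    produces one Gaussian-perturbed offspring that competes next round.
    """
    rng = np.random.default_rng(seed)
    candidates = rng.normal(size=(pop, dim))
    for _ in range(iters):
        scores = np.array([fitness(c) for c in candidates])
        winners = candidates[np.argsort(scores)[-pop // 2:]]   # top half survives
        offspring = winners + sigma * rng.normal(size=winners.shape)
        candidates = np.vstack([winners, offspring])           # compete again
    scores = np.array([fitness(c) for c in candidates])
    return candidates[scores.argmax()]

# Toy fitness: reward attention (after softmax) concentrated on time step 2,
# standing in for validation loss on held-out sequence data.
def fitness(w):
    a = np.exp(w - w.max())
    a /= a.sum()
    return a[2]

best = competitive_random_search(fitness, dim=5)
a = np.exp(best - best.max())
a /= a.sum()
```

Because selection acts only on the fitness value, the search can escape flat or deceptive regions where gradient updates on the attention layer would stall.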
