Senzhang Wang

SaSDim: Self-Adaptive Noise Scaling Diffusion Model for Spatial Time Series Imputation

Sep 05, 2023
Shunyang Zhang, Senzhang Wang, Xianzhen Tan, Ruochen Liu, Jian Zhang, Jianxin Wang

Spatial time series imputation is critically important to many real-world applications such as intelligent transportation and air quality monitoring. Although recent transformer- and diffusion-model-based approaches have achieved significant performance gains over conventional statistics-based methods, spatial time series imputation remains challenging due to the complex spatio-temporal dependencies and the noise uncertainty of the data. In particular, recent diffusion-based models may introduce random noise into the imputations and thus degrade model performance. To this end, we propose a self-adaptive noise scaling diffusion model named SaSDim to perform spatial time series imputation more effectively. Specifically, we propose a new loss function that scales the noise to a similar intensity, and an across-spatial-temporal global convolution module to better capture the dynamic spatial-temporal dependencies. Extensive experiments on three real-world datasets verify the effectiveness of SaSDim against current state-of-the-art baselines.
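The core idea of scaling injected diffusion noise to a comparable intensity can be illustrated with a minimal NumPy sketch. This is a generic DDPM-style forward step with a hypothetical `noise_scale` parameter standing in for SaSDim's learned self-adaptive scale; it is not the paper's actual loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, alpha_bar_t, noise_scale=1.0):
    # Standard forward noising: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps.
    # noise_scale is a hypothetical stand-in for a learned scale that keeps
    # injected noise at a comparable intensity across series.
    eps = noise_scale * rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return xt, eps

def scaled_noise_loss(eps_pred, eps_true, scale=1.0):
    # L2 noise-prediction loss after rescaling to a common intensity
    # (the general idea only, not SaSDim's exact objective).
    return float(np.mean((scale * (eps_pred - eps_true)) ** 2))
```

A perfect noise prediction yields zero loss regardless of the scale, which is the property the rescaling preserves.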

Can Transformer and GNN Help Each Other?

Aug 28, 2023
Peiyan Zhang, Yuchen Yan, Chaozhuo Li, Senzhang Wang, Xing Xie, Sunghun Kim

Although the Transformer has achieved great success in natural language processing and computer vision, it has difficulty generalizing to medium- and large-scale graph data for two important reasons: (i) high complexity; (ii) failure to capture the complex and entangled structure information. In graph representation learning, Graph Neural Networks (GNNs) can fuse the graph structure and node attributes but have limited receptive fields. Therefore, we ask whether we can combine Transformers and GNNs to help each other. In this paper, we propose a new model named TransGNN in which Transformer layers and GNN layers are used alternately to improve each other. Specifically, to expand the receptive field and disentangle information aggregation from edges, we propose using the Transformer to aggregate information from more relevant nodes, improving the message passing of GNNs. Besides, to capture graph structure information, we utilize positional encoding and use the GNN layer to fuse the structure into node attributes, which improves the Transformer on graph data. We also propose sampling the most relevant nodes for the Transformer, together with two efficient sample-update strategies to lower the complexity. Finally, we theoretically prove that TransGNN is more expressive than GNNs at the cost of only extra linear complexity. Experiments on eight datasets corroborate the effectiveness of TransGNN on node and graph classification tasks.
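The alternating-layer design can be sketched in a few lines: a global self-attention pass (unrestricted receptive field) followed by neighbourhood message passing. This is a simplified illustration of the architecture's high-level structure, with no learned weights and no node sampling.

```python
import numpy as np

def gnn_layer(H, A):
    # Mean-aggregation message passing: each node averages neighbour
    # features using a row-normalised adjacency with self-loops.
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    return (A_hat / deg) @ H

def attention_layer(H):
    # Global self-attention over all node pairs, ignoring edges,
    # which expands the receptive field beyond graph neighbourhoods.
    scores = H @ H.T / np.sqrt(H.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ H

def trans_gnn(H, A, depth=2):
    # Alternate the two layer types, as in the paper's high-level design.
    for _ in range(depth):
        H = attention_layer(H)
        H = gnn_layer(H, A)
    return H
```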

Continual Learning on Dynamic Graphs via Parameter Isolation

May 23, 2023
Peiyan Zhang, Yuchen Yan, Chaozhuo Li, Senzhang Wang, Xing Xie, Guojie Song, Sunghun Kim

Many real-world graph learning tasks require handling dynamic graphs where new nodes and edges emerge. Dynamic graph learning methods commonly suffer from the catastrophic forgetting problem, where knowledge learned for previous graphs is overwritten by updates for new graphs. To alleviate the problem, continual graph learning methods have been proposed. However, existing continual graph learning methods aim to learn new patterns and maintain old ones with the same set of parameters of fixed size, and thus face a fundamental tradeoff between the two goals. In this paper, we propose Parameter Isolation GNN (PI-GNN) for continual learning on dynamic graphs, which circumvents the tradeoff via parameter isolation and expansion. Our motivation is that different parameters contribute to learning different graph patterns. Based on this idea, we expand the model parameters to continually learn emerging graph patterns. Meanwhile, to effectively preserve knowledge for unaffected patterns, we locate the parameters corresponding to such patterns via optimization and freeze them to prevent rewriting. Experiments on eight real-world datasets corroborate the effectiveness of PI-GNN compared to state-of-the-art baselines.
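The freeze-and-expand mechanism can be sketched independently of any GNN. The class below is a hypothetical minimal container, not PI-GNN's implementation: old parameter blocks are frozen (never updated) while a fresh block is allocated for new patterns.

```python
import numpy as np

class IsolatedParams:
    """Frozen parameter blocks for old patterns plus one expandable active block."""

    def __init__(self, n_init):
        self.frozen = []                 # blocks that are never updated again
        self.active = np.zeros(n_init)   # block currently being trained

    def freeze_and_expand(self, n_new):
        # Freeze the current block, then allocate fresh parameters so new
        # graph patterns are learned without overwriting old knowledge.
        self.frozen.append(self.active.copy())
        self.active = np.zeros(n_new)

    def update(self, grad, lr=0.1):
        # Gradient step touches only the active block; frozen blocks are safe.
        self.active -= lr * grad
```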

Robust Graph Structure Learning over Images via Multiple Statistical Tests

Oct 08, 2022
Yaohua Wang, FangYi Zhang, Ming Lin, Senzhang Wang, Xiuyu Sun, Rong Jin

Graph structure learning aims to learn connectivity in a graph from data. It is particularly important for many computer vision related tasks, since no explicit graph structure is available for images in most cases. A natural way to construct a graph among images is to treat each image as a node and assign pairwise image similarities as weights to the corresponding edges. It is well known that pairwise similarities between images are sensitive to noise in the feature representations, leading to unreliable graph structures. We address this problem from the viewpoint of statistical tests. By viewing the feature vector of each node as an independent sample, the decision of whether to create an edge between two nodes based on their similarity in feature representation can be cast as a ${\it single}$ statistical test. To improve the robustness of the edge-creation decision, multiple samples are drawn and integrated by ${\it multiple}$ statistical tests to generate a more reliable similarity measure, and consequently a more reliable graph structure. The corresponding elegant matrix form, named $\mathcal{B}\textbf{-Attention}$, is designed for efficiency. The effectiveness of multiple tests for graph structure learning is verified both theoretically and empirically on multiple clustering and ReID benchmark datasets. Source codes are available at https://github.com/Thomas-wyh/B-Attention.
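The single-test vs. multiple-test idea can be sketched plainly: instead of one cosine similarity between two feature vectors, pool similarities over several noisy samples per node. This toy averaging is a stand-in for the paper's statistical-test integration and its $\mathcal{B}$-Attention matrix form.

```python
import numpy as np

def cosine(u, v):
    # Single-test similarity between one feature sample per node.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def multi_test_similarity(samples_a, samples_b):
    # Each node contributes several noisy feature samples; averaging all
    # pairwise similarities pools many single "tests" into one lower-variance
    # statistic for the edge-creation decision.
    sims = [cosine(a, b) for a in samples_a for b in samples_b]
    return float(np.mean(sims))
```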

* Accepted at NeurIPS 2022. Homepage: https://thomas-wyh.github.io/

Geometric Interaction Augmented Graph Collaborative Filtering

Aug 02, 2022
Yiding Zhang, Chaozhuo Li, Senzhang Wang, Jianxun Lian, Xing Xie

Graph-based collaborative filtering can capture the essential and abundant collaborative signals from high-order interactions, and has thus received increasing research interest. Conventionally, the embeddings of users and items are defined in Euclidean spaces, along with propagation on the interaction graphs. Meanwhile, recent works point out that high-order interactions naturally form tree-like structures, on which hyperbolic models thrive. However, interaction graphs inherently exhibit hybrid and nested geometric characteristics, and existing single-geometry models are inadequate to fully capture such sophisticated topological patterns. In this paper, we propose to model user-item interactions in a hybrid geometric space, in which the merits of Euclidean and hyperbolic spaces are enjoyed simultaneously to learn expressive representations. Experimental results on public datasets validate the effectiveness of our proposal.
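A minimal sketch of scoring in a hybrid space, assuming separate Euclidean and Poincaré-ball embeddings per user/item and a simple weighted blend (`w` is a hypothetical mixing weight, not the paper's actual fusion scheme). The Poincaré distance below is the standard unit-curvature formula.

```python
import numpy as np

def euclidean_score(u, v):
    # Higher is better: negated Euclidean distance.
    return -float(np.linalg.norm(u - v))

def poincare_distance(u, v):
    # Hyperbolic distance in the Poincare ball (curvature -1):
    # d = arccosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2))).
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return float(np.arccosh(1 + 2 * sq / denom))

def hybrid_score(u_e, v_e, u_h, v_h, w=0.5):
    # Blend affinities from both geometries; hyperbolic distance is
    # subtracted so that closer pairs score higher in both terms.
    return w * euclidean_score(u_e, v_e) - (1 - w) * poincare_distance(u_h, v_h)
```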

Reconstruction Enhanced Multi-View Contrastive Learning for Anomaly Detection on Attributed Networks

May 10, 2022
Jiaqiang Zhang, Senzhang Wang, Songcan Chen

Detecting abnormal nodes in attributed networks is of great importance in many real-world applications, such as financial fraud detection and cyber security. The task is challenging due to both the complex interactions between anomalous nodes and their counterparts and their inconsistency in attributes. This paper proposes a self-supervised learning framework that jointly optimizes a multi-view contrastive-learning-based module and an attribute-reconstruction-based module to more accurately detect anomalies on attributed networks. Specifically, two contrastive learning views are first established, which allow the model to better encode rich local and global information related to abnormality. Motivated by the attribute-consistency principle between neighboring nodes, a masked-autoencoder-based reconstruction module is also introduced to identify the nodes with large reconstruction errors, which are then regarded as anomalies. Finally, the two complementary modules are integrated to detect anomalous nodes more accurately. Extensive experiments on five benchmark datasets show that our model outperforms current state-of-the-art models.
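The attribute-consistency principle can be illustrated with a toy scorer: reconstruct each node's attributes from the mean of its neighbours and rank nodes by reconstruction error. This neighbour-mean reconstruction is a deliberately simplified stand-in for the paper's masked-autoencoder module.

```python
import numpy as np

def reconstruction_scores(X, A):
    # X: (n, d) node attributes; A: (n, n) binary adjacency (no self-loops).
    # Reconstruct each node from its neighbours' mean attributes; nodes
    # that deviate from their neighbourhood get large errors and are
    # flagged as candidate anomalies.
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1                    # guard against isolated nodes
    X_hat = (A / deg) @ X                # neighbour-mean reconstruction
    return np.linalg.norm(X - X_hat, axis=1)
```

On a clique where one node's attributes deviate sharply, that node receives the largest score.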

* Accepted at IJCAI-ECAI 2022 
HousE: Knowledge Graph Embedding with Householder Parameterization

Feb 16, 2022
Rui Li, Jianan Zhao, Chaozhuo Li, Di He, Yiqi Wang, Yuming Liu, Hao Sun, Senzhang Wang, Weiwei Deng, Yanming Shen, Xing Xie, Qi Zhang

The effectiveness of knowledge graph embedding (KGE) largely depends on the ability to model intrinsic relation patterns and mapping properties. However, existing approaches can only capture some of them with insufficient modeling capacity. In this work, we propose a more powerful KGE framework named HousE, which involves a novel parameterization based on two kinds of Householder transformations: (1) Householder rotations to achieve superior capacity of modeling relation patterns; (2) Householder projections to handle sophisticated relation mapping properties. Theoretically, HousE is capable of modeling crucial relation patterns and mapping properties simultaneously. Besides, HousE is a generalization of existing rotation-based models while extending the rotations to high-dimensional spaces. Empirically, HousE achieves new state-of-the-art performance on five benchmark datasets. Our code is available at https://github.com/anrep/HousE.
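The Householder building block is standard linear algebra and easy to sketch: a reflection is $H = I - 2vv^{\top}/(v^{\top}v)$, and composing two reflections yields a rotation (determinant $+1$). This shows the primitive only, not HousE's full relation parameterization.

```python
import numpy as np

def householder(v):
    # Householder reflection matrix H = I - 2 v v^T / (v^T v).
    # H is orthogonal with determinant -1.
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def householder_rotation(v1, v2):
    # The product of two reflections is a rotation (determinant +1),
    # the primitive HousE composes to model relation patterns in
    # high-dimensional spaces.
    return householder(v2) @ householder(v1)

R = householder_rotation(np.array([1.0, 0.0]), np.array([1.0, 1.0]))
```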

Ada-NETS: Face Clustering via Adaptive Neighbour Discovery in the Structure Space

Feb 08, 2022
Yaohua Wang, Yaobin Zhang, Fangyi Zhang, Senzhang Wang, Ming Lin, YuQi Zhang, Xiuyu Sun

Face clustering has attracted rising research interest recently as a way to exploit the massive amounts of face images on the web. State-of-the-art performance has been achieved by Graph Convolutional Networks (GCNs) due to their powerful representation capacity. However, existing GCN-based methods build face graphs mainly according to kNN relations in the feature space, which may introduce many noisy edges connecting faces of different classes. The face features are polluted when messages pass along these noisy edges, degrading the performance of GCNs. In this paper, a novel algorithm named Ada-NETS is proposed to cluster faces by constructing clean graphs for GCNs. In Ada-NETS, each face is transformed into a new structure space, obtaining robust features by considering the face features of neighbouring images. Then, an adaptive neighbour discovery strategy is proposed to determine a proper number of edges connecting each face image. It significantly reduces the noisy edges while maintaining the good ones, building a graph with clean yet rich edges for GCNs to cluster faces. Experiments on multiple public clustering datasets show that Ada-NETS significantly outperforms current state-of-the-art methods, demonstrating its superiority and generalization.
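The contrast with fixed-k kNN can be sketched with a simple heuristic: cut the neighbour list at the largest similarity drop, so each node keeps a different number of edges. This gap-based rule is a hypothetical stand-in for Ada-NETS's learned adaptive filter.

```python
import numpy as np

def adaptive_neighbours(sims, max_k=5):
    # sims: similarity of one node to all candidates. Rank the top
    # candidates and cut before the biggest similarity drop, so the
    # per-node neighbour count adapts instead of being a fixed k.
    order = np.argsort(-sims)[:max_k]
    ranked = sims[order]
    gaps = ranked[:-1] - ranked[1:]
    k = int(np.argmax(gaps)) + 1         # keep everything before the drop
    return order[:k]
```

With similarities [0.9, 0.85, 0.2, 0.15, 0.1], the rule keeps the first two candidates and discards the dissimilar tail.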

ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network

Oct 15, 2021
Xingcheng Fu, Jianxin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu

Graph Neural Networks (GNNs) have been widely studied in various graph data mining tasks. Most existing GNNs embed graph data into Euclidean space and are thus less effective at capturing the ubiquitous hierarchical structures in real-world networks. Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and are thus more effective at capturing the hierarchical structures of graphs in node representation learning. In hyperbolic geometry, the graph hierarchical structure can be reflected by the curvature of the hyperbolic space, and different curvatures can model different hierarchical structures of a graph. However, most existing HGNNs manually set the curvature to a fixed value for simplicity, which yields suboptimal graph learning performance given the complex and diverse hierarchical structures of graphs. To resolve this problem, we propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks. Specifically, ACE-HGNN exploits a multi-agent reinforcement learning framework and contains two agents, ACE-Agent and HGNN-Agent, for learning the curvature and node representations, respectively. The two agents are updated collaboratively by a Nash Q-learning algorithm, seeking the optimal hyperbolic space indexed by the curvature. Extensive experiments on multiple real-world graph datasets demonstrate a significant and consistent improvement in model quality, with competitive performance and good generalization ability.
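How curvature indexes a family of hyperbolic spaces can be made concrete with the curvature-parameterized Poincaré distance, plus a toy grid search over candidate curvatures. The grid search is only a crude stand-in for ACE-HGNN's Nash Q-learning exploration.

```python
import numpy as np

def hyperbolic_distance(u, v, c):
    # Distance in the Poincare ball of curvature -c (c > 0):
    # d_c(u, v) = (1/sqrt(c)) * arccosh(1 + 2c||u-v||^2 /
    #             ((1 - c||u||^2)(1 - c||v||^2))).
    sq = np.sum((u - v) ** 2)
    denom = (1 - c * np.sum(u ** 2)) * (1 - c * np.sum(v ** 2))
    return float(np.arccosh(1 + 2 * c * sq / denom)) / np.sqrt(c)

def explore_curvature(pairs, target, candidates):
    # Pick the curvature whose pairwise distances best match a target
    # signal -- a toy replacement for the RL-based curvature search.
    def loss(c):
        d = np.array([hyperbolic_distance(u, v, c) for u, v in pairs])
        return float(np.mean((d - target) ** 2))
    return min(candidates, key=loss)
```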

Session-based Recommendation with Heterogeneous Graph Neural Network

Aug 12, 2021
Jinpeng Chen, Haiyang Li, Fan Zhang, Senzhang Wang, Kaimin Wei

The purpose of a session-based recommendation system is to predict the user's next click from the preceding session sequence. Current studies generally learn user preferences from the transitions of items in the session sequence. However, other effective information in the session sequence, such as user profiles, is largely ignored, which may leave the model unable to learn a user's specific preferences. In this paper, we propose a heterogeneous graph neural network-based session recommendation method, named SR-HetGNN, which learns session embeddings with a heterogeneous graph neural network (HetGNN) and captures the specific preferences of anonymous users. Specifically, SR-HetGNN first constructs heterogeneous graphs containing various types of nodes according to the session sequence, which capture the dependencies among items, users, and sessions. Second, HetGNN captures the complex transitions between items and learns item embeddings containing user information. Finally, to account for users' long- and short-term preferences, local and global session embeddings are combined with an attention network to obtain the final session embedding. Extensive experiments over two large real-world datasets, Diginetica and Tmall, show that SR-HetGNN is superior to existing state-of-the-art session-based recommendation methods.
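The final fusion step can be sketched simply: take the last item as the short-term view, attention-pool the whole session (queried by the last item) as the long-term view, and combine them. The additive fusion with a fixed 0.5 weight is a simplification of the paper's attention network.

```python
import numpy as np

def session_embedding(item_embs):
    # item_embs: (seq_len, d) embeddings of the items clicked in a session.
    last = item_embs[-1]                     # short-term preference view
    scores = item_embs @ last                # attention queried by last item
    w = np.exp(scores - scores.max())
    w /= w.sum()
    global_emb = w @ item_embs               # attention-pooled long-term view
    return 0.5 * (global_emb + last)         # simplified additive fusion
```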
