
Jiawei Jiang


Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning

Oct 24, 2023
Yuxiang Wang, Xiao Yan, Chuang Hu, Fangcheng Fu, Wentao Zhang, Hao Wang, Shuo Shang, Jiawei Jiang


For graph self-supervised learning (GSSL), masked autoencoder (MAE) follows the generative paradigm and learns to reconstruct masked graph edges or node features. Contrastive learning (CL) maximizes the similarity between augmented views of the same graph and is widely used for GSSL. However, MAE and CL are considered separately in existing works for GSSL. We observe that the MAE and CL paradigms are complementary and propose the graph contrastive masked autoencoder (GCMAE) framework to unify them. Specifically, by focusing on local edges or node features, MAE cannot capture global information of the graph and is sensitive to particular edges and features. On the contrary, CL excels at extracting global information because it considers the relation between graphs. As such, we equip GCMAE with an MAE branch and a CL branch, and the two branches share a common encoder, which allows the MAE branch to exploit the global information extracted by the CL branch. To force GCMAE to capture global graph structures, we train it to reconstruct the entire adjacency matrix instead of only the masked edges as in existing works. Moreover, to tackle the feature smoothing problem of MAE, we propose a discrimination loss for feature reconstruction that increases the disparity between node embeddings rather than merely reducing the reconstruction error. We evaluate GCMAE on four popular graph tasks (i.e., node classification, node clustering, link prediction, and graph classification) and compare it with 14 state-of-the-art baselines. The results show that GCMAE consistently provides good accuracy across these tasks, with a maximum improvement of 3.2% over the best-performing baseline.
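
A minimal sketch of the shared-encoder, two-branch objective described above. The linear encoder/decoder, the InfoNCE contrastive term, and the exact form of the discrimination loss are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of GCMAE's two-branch training objective. The linear
# encoder/decoder, InfoNCE contrastive term, and the exact discrimination
# loss are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCMAESketch(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)   # shared by both branches
        self.decoder = nn.Linear(hid_dim, in_dim)   # MAE-branch feature decoder

    def forward(self, x_masked, x_orig, x_view1, x_view2, adj, tau=0.5, lam=0.5):
        # MAE branch: reconstruct features and the ENTIRE adjacency matrix.
        h = self.encoder(x_masked)
        adj_rec = torch.sigmoid(h @ h.t())               # dense (N, N) reconstruction
        loss_adj = F.binary_cross_entropy(adj_rec, adj)  # adj: dense float {0,1} matrix
        # Discrimination-style loss: penalize reconstruction error while pushing
        # node embeddings apart to counter feature smoothing (crude stand-in).
        loss_feat = F.mse_loss(self.decoder(h), x_orig) - h.std(dim=0).mean()
        # CL branch: InfoNCE between two augmented views through the SAME encoder,
        # so its global signal flows back into the shared representation.
        z1 = F.normalize(self.encoder(x_view1), dim=-1)
        z2 = F.normalize(self.encoder(x_view2), dim=-1)
        logits = z1 @ z2.t() / tau
        loss_cl = F.cross_entropy(logits, torch.arange(z1.size(0)))
        return loss_adj + loss_feat + lam * loss_cl
```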


BenchTemp: A General Benchmark for Evaluating Temporal Graph Neural Networks

Aug 31, 2023
Qiang Huang, Jiawei Jiang, Xi Susie Rao, Ce Zhang, Zhichao Han, Zitao Zhang, Xin Wang, Yongjun He, Quanqing Xu, Yang Zhao, Chuang Hu, Shuo Shang, Bo Du


To handle graphs whose features or connectivity evolve over time, a series of temporal graph neural networks (TGNNs) have been proposed. Despite the success of these TGNNs, previous evaluations suffer from four critical issues: 1) inconsistent datasets, 2) inconsistent evaluation pipelines, 3) a lack of workload diversity, and 4) a lack of efficient comparison. Overall, an empirical study that puts TGNN models on the same footing and compares them comprehensively is still missing. To this end, we propose BenchTemp, a general benchmark for evaluating TGNN models on various workloads. BenchTemp provides a set of benchmark datasets so that different TGNN models can be fairly compared. Further, BenchTemp engineers a standard pipeline that unifies TGNN evaluation. With BenchTemp, we extensively compare representative TGNN models on different tasks (e.g., link prediction and node classification) and settings (transductive and inductive), with respect to both effectiveness and efficiency metrics. We have made BenchTemp publicly available at https://github.com/qianghuangwhu/benchtemp.
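
As a rough illustration of what such a unified pipeline standardizes, a hypothetical harness might look as follows; the names are placeholders, not BenchTemp's actual API (see the linked repository for the real interface):

```python
# Hypothetical harness illustrating the kind of pipeline BenchTemp unifies:
# identical splits, one fit/score path, and effectiveness plus efficiency
# recorded per run. Names are placeholders, not BenchTemp's actual API.
import time

def evaluate_tgnn(model, dataset, setting="transductive", task="link_prediction"):
    train, val, test = dataset.split(setting=setting)  # same splits for every model
    model.fit(train, val)
    start = time.perf_counter()
    score = model.score(test, task=task)               # effectiveness, e.g., AP/AUC
    return {"model": model.name, "setting": setting, "task": task,
            "score": score, "seconds": time.perf_counter() - start}
```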

* 28 pages, 23 figures, 27 tables. Submitted to the Conference on Neural Information Processing Systems 2023 Track on Datasets and Benchmarks 

Unified Data Management and Comprehensive Performance Evaluation for Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark]

Aug 24, 2023
Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, Jingyuan Wang


The field of urban spatial-temporal prediction is advancing rapidly with the development of deep learning techniques and the availability of large-scale datasets. However, challenges persist in accessing and utilizing diverse urban spatial-temporal datasets that come from different sources and are stored in different formats, and in determining effective model structures and components amid the proliferation of deep learning models. This work addresses these challenges and makes three significant contributions. First, we introduce "atomic files", a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets, simplifying data management. Second, we present a comprehensive overview of technological advances in urban spatial-temporal prediction models, guiding the development of robust models. Third, we conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions. Overall, this work simplifies the management of urban spatial-temporal data, guides future efforts, and facilitates the development of accurate and efficient urban spatial-temporal prediction models, with the potential to make long-term contributions to urban spatial-temporal data management and prediction and ultimately improve urban living standards.
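
A minimal sketch of loading such a dataset, assuming the .geo/.rel/.dyna file split and column names used in the authors' LibCity ecosystem (see the next entry); treat the schema as an assumption and check the released datasets for the exact columns:

```python
# Minimal sketch of reading an atomic-file dataset with pandas. The
# .geo/.rel/.dyna split and column names are assumptions based on the
# authors' LibCity ecosystem; verify against the released datasets.
import pandas as pd

geo  = pd.read_csv("METR_LA.geo")    # entities: geo_id, type, coordinates
rel  = pd.read_csv("METR_LA.rel")    # relations: origin_id, destination_id (graph edges)
dyna = pd.read_csv("METR_LA.dyna")   # observations: time, entity_id, traffic_speed, ...

# Pivot the dynamic table into a (time x sensor) matrix for model input.
speed = dyna.pivot(index="time", columns="entity_id", values="traffic_speed")
print(speed.shape)
```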

* 14 pages, 3 figures. arXiv admin note: text overlap with arXiv:2304.14343 

Towards Efficient and Comprehensive Urban Spatial-Temporal Prediction: A Unified Library and Performance Benchmark

Apr 29, 2023
Jingyuan Wang, Jiawei Jiang, Wenjun Jiang, Chengkai Han, Wayne Xin Zhao


As deep learning technology advances and urban spatial-temporal data accumulates, an increasing number of deep learning models have been proposed to solve urban spatial-temporal prediction problems. However, the field has notable limitations: open-source data come in various formats and are difficult to use, few papers make their code and data openly available, and open-source models are built on different frameworks and platforms, making comparisons challenging. A standardized framework is urgently needed to implement and evaluate these methods. To address these issues, we provide a comprehensive review of urban spatial-temporal prediction and propose a unified storage format for spatial-temporal data called atomic files. We also propose LibCity, an open-source library that offers researchers a credible experimental tool and a convenient development framework. In this library, we have reproduced 65 spatial-temporal prediction models and collected 55 spatial-temporal datasets, allowing researchers to conduct comprehensive experiments conveniently. Using LibCity, we conducted a series of experiments to validate the effectiveness of different models and components, and we summarize promising future technology developments and research directions for spatial-temporal prediction. By enabling fair model comparisons, designing a unified data storage format, and simplifying the process of developing new models, LibCity is poised to make significant contributions to the spatial-temporal prediction field.
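
A minimal usage sketch, modeled on LibCity's documented quick start; the exact import path, argument names, and available task/model/dataset identifiers may differ across library versions:

```python
# Quick-start sketch for LibCity, modeled on its documentation; the import
# path and argument names may differ across versions of the library.
from libcity.pipeline import run_model

# Train and evaluate one reproduced model on one atomic-file dataset.
run_model(task="traffic_state_pred", model_name="GRU", dataset_name="METR_LA")
```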


Two-stage MR Image Segmentation Method for Brain Tumors based on Attention Mechanism

Apr 17, 2023
Li Zhu, Jiawei Jiang, Lin Lu, Jin Li


Multimodal magnetic resonance imaging (MRI) can reveal different patterns of human tissue and is crucial for clinical diagnosis. However, limited by cost, noise and manual labeling, obtaining diverse and reliable multimodal MR images remains a challenge. For the same lesion, different MRI manifestations differ greatly in background information, coarse localization and fine structure. To obtain better generation and segmentation performance, we propose a coordination-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN). The performance of the generator is optimized by introducing a Coordinate Attention (CA) module and a Spatial Attention (SA) module. The two modules make full use of the captured location information, accurately locate the region of interest, and strengthen the generator's network structure; this ability to extract the structural and detailed information of the original medical image helps generate the desired images with higher quality. The original CycleGAN also suffers from long training time, an excessive number of parameters, and difficulty in converging. In response, we introduce the Coordinate Attention (CA) module to replace the residual blocks, reducing the number of parameters, and pair it with the spatial information extraction network above to strengthen the information extraction ability. On the basis of CASP-GAN, we further propose an attentional generative cross-modality segmentation (AGCMS) method, which feeds the modalities generated by CASP-GAN together with the real modalities into the segmentation network for brain tumor segmentation. Experimental results show that CASP-GAN outperforms CycleGAN and some state-of-the-art methods in PSNR, SSIM and RMSE on most tasks.
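
The Coordinate Attention block referenced above (Hou et al., 2021) factorizes global pooling along the height and width axes so the attention map retains positional information; a compact PyTorch rendering is sketched below, with the reduction ratio an illustrative choice rather than the paper's setting:

```python
# Sketch of a Coordinate Attention (CA) block (Hou et al., 2021); the
# reduction ratio here is an illustrative choice, not the paper's setting.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1  = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn     = nn.BatchNorm2d(mid)
        self.act    = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1): pool over width
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1): pool over height
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w     # reweight features with axis-wise attention
```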


GA-HQS: MRI reconstruction via a generically accelerated unfolding approach

Apr 06, 2023
Jiawei Jiang, Yuchao Feng, Honghui Xu, Wanjun Chen, Jianwei Zheng


Deep unfolding networks (DUNs) are the foremost methods in the realm of compressed sensing MRI, as they can employ learnable networks to facilitate interpretable forward-inference operators. However, several daunting issues remain, including the heavy dependency on first-order optimization algorithms, insufficient information fusion mechanisms, and a limited ability to capture long-range relationships. To address these issues, we propose a Generically Accelerated Half-Quadratic Splitting (GA-HQS) algorithm that incorporates second-order gradient information and pyramid attention modules for the delicate fusion of inputs at the pixel level. Moreover, a multi-scale split transformer is designed to enhance the global feature representation. Comprehensive experiments demonstrate that our method surpasses previous ones on single-coil MRI acceleration tasks.
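
For reference, the vanilla first-order HQS scheme that such unfolding networks build on, written for the CS-MRI problem with measurements y = Ax + n and regularizer R:

```latex
% Standard half-quadratic splitting for  min_x (1/2)||Ax - y||_2^2 + \lambda R(x):
% introduce an auxiliary variable z with penalty weight \mu, then alternate
\begin{aligned}
  x_{k+1} &= \arg\min_{x}\; \tfrac{1}{2}\,\|Ax - y\|_2^2 + \tfrac{\mu}{2}\,\|x - z_k\|_2^2,\\
  z_{k+1} &= \arg\min_{z}\; \lambda\, R(z) + \tfrac{\mu}{2}\,\|x_{k+1} - z\|_2^2 .
\end{aligned}
```

In a DUN, the z-update is replaced by a learned network (here, the pyramid attention modules and multi-scale split transformer), and GA-HQS's stated contribution is to inject second-order gradient information into the x-update; the exact accelerated update rule is given in the paper.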


BUAA_BIGSCity: Spatial-Temporal Graph Neural Network for Wind Power Forecasting in Baidu KDD CUP 2022

Feb 22, 2023
Jiawei Jiang, Chengkai Han, Jingyuan Wang


In this technical report, we present our solution for the Baidu KDD Cup 2022 Spatial Dynamic Wind Power Forecasting Challenge. Wind power is a rapidly growing source of clean energy, and accurate wind power forecasting is essential for grid stability and security of supply. The organizers therefore provide a wind power dataset containing historical data from 134 wind turbines and launched the Baidu KDD Cup 2022 to examine the limitations of current wind power forecasting methods. The average of RMSE (Root Mean Square Error) and MAE (Mean Absolute Error) is used as the evaluation score. We adopt two spatial-temporal graph neural network models, i.e., AGCRN and MTGNN, as our basic models. We train AGCRN by 5-fold cross-validation and additionally train MTGNN directly on the training and validation sets. Finally, we ensemble the two models based on the loss values of the validation set as our final submission. Using this method, our team achieves a score of -45.36026 on the test set. We release our code on GitHub (https://github.com/BUAABIGSCity/KDDCUP2022) for reproduction.
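
The report states only that the two models are ensembled "based on the loss values of the validation set"; one common instantiation of that idea, shown purely as an assumption, weights each model inversely to its validation loss:

```python
# Hypothetical inverse-validation-loss ensemble of the two forecasters; the
# exact weighting rule is not specified in the report, so this is an
# illustrative assumption.
import numpy as np

def ensemble(preds_agcrn, preds_mtgnn, val_loss_agcrn, val_loss_mtgnn):
    w = np.array([1.0 / val_loss_agcrn, 1.0 / val_loss_mtgnn])
    w /= w.sum()                                  # normalize weights to sum to 1
    return w[0] * preds_agcrn + w[1] * preds_mtgnn
```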

* 6 pages, 4 figures, Report for ACM KDD Workshop - Baidu KDD CUP 2022 

PDFormer: Propagation Delay-aware Dynamic Long-range Transformer for Traffic Flow Prediction

Jan 19, 2023
Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, Jingyuan Wang


As a core technology of Intelligent Transportation Systems, traffic flow prediction has a wide range of applications. The fundamental challenge in traffic flow prediction is to effectively model the complex spatial-temporal dependencies in traffic data. Spatial-temporal Graph Neural Network (GNN) models have emerged as one of the most promising methods to solve this problem. However, GNN-based models have three major limitations for traffic prediction: i) most methods model spatial dependencies in a static manner, which limits the ability to learn dynamic urban traffic patterns; ii) most methods only consider short-range spatial information and are unable to capture long-range spatial dependencies; iii) these methods ignore the fact that the propagation of traffic conditions between locations has a time delay in traffic systems. To this end, we propose a novel Propagation Delay-aware dynamic long-range transFormer, namely PDFormer, for accurate traffic flow prediction. Specifically, we design a spatial self-attention module to capture the dynamic spatial dependencies. Then, two graph masking matrices are introduced to highlight spatial dependencies from short- and long-range views. Moreover, a traffic delay-aware feature transformation module is proposed to empower PDFormer with the capability of explicitly modeling the time delay of spatial information propagation. Extensive experimental results on six real-world public traffic datasets show that our method can not only achieve state-of-the-art performance but also exhibit competitive computational efficiency. Moreover, we visualize the learned spatial-temporal attention map to make our model highly interpretable.
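
The two masking matrices act directly on the spatial self-attention scores; a minimal sketch follows, with the construction of the two binary masks (e.g., k-hop neighbors for the short-range view vs. distant-but-similar nodes for the long-range view) left as an assumption:

```python
# Minimal sketch of mask-constrained spatial self-attention: scores outside
# the short- or long-range mask are set to -inf before the softmax. How the
# two binary masks are built is an assumption here; each row is assumed to
# keep at least its own node (self-loop) so the softmax is well defined.
import torch
import torch.nn.functional as F

def masked_spatial_attention(q, k, v, mask):
    # q, k, v: (num_nodes, d); mask: (num_nodes, num_nodes) boolean
    scores = q @ k.t() / q.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Two attention heads with different masks, concatenated afterwards:
# out = torch.cat([masked_spatial_attention(q, k, v, short_mask),
#                  masked_spatial_attention(q, k, v, long_mask)], dim=-1)
```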

* 9 pages, 5 figures, Accepted by AAAI2023 

Continuous Trajectory Generation Based on Two-Stage GAN

Jan 16, 2023
Wenjun Jiang, Wayne Xin Zhao, Jingyuan Wang, Jiawei Jiang


Simulating human mobility and generating large-scale trajectories are of great use in many real-world applications, such as urban planning, epidemic spreading analysis, and geographic privacy protection. Although many previous works have studied the problem of trajectory generation, the continuity of the generated trajectories has been neglected, which limits the usefulness of these methods in practical urban simulation scenarios. To solve this problem, we propose a novel two-stage generative adversarial framework, TS-TrajGen, that generates continuous trajectories on the road network and efficiently integrates prior domain knowledge of human mobility with a model-free learning paradigm. Specifically, we build the generator under the human mobility hypothesis of the A* algorithm to learn human mobility behavior. For the discriminator, we combine the sequential reward with the mobility yaw reward to enhance the effectiveness of the generator. Finally, we propose a novel two-stage generation process to overcome the weakness of the existing stochastic generation process. Extensive experiments on two real-world datasets and two case studies demonstrate that our framework yields significant improvements over the state-of-the-art methods.
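
For intuition, the A*-style search that underpins the generator expands road-network nodes by a transition cost plus a goal-directed heuristic, which is what keeps generated paths connected; a schematic sketch, where the cost g and heuristic h are placeholders for the learned mobility model:

```python
# Schematic A*-style search over a road network, illustrating how continuity
# of generated trajectories can be enforced; g_cost and h_cost are
# placeholders for TS-TrajGen's learned scoring, not the paper's code.
import heapq

def astar(graph, start, goal, g_cost, h_cost):
    # graph: dict mapping node id -> iterable of neighbor node ids
    frontier = [(h_cost(start, goal), 0.0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                       # a connected road-network path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            if nxt not in seen:
                g2 = g + g_cost(node, nxt)    # learned transition cost
                heapq.heappush(frontier,
                               (g2 + h_cost(nxt, goal), g2, nxt, path + [nxt]))
    return None                               # no path found
```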

* Accepted by AAAI23 