Yanming Shen

To Copy Rather Than Memorize: A Vertical Learning Paradigm for Knowledge Graph Completion

May 23, 2023
Rui Li, Xu Chen, Chaozhuo Li, Yanming Shen, Jianan Zhao, Yujing Wang, Weihao Han, Hao Sun, Weiwei Deng, Qi Zhang, Xing Xie

Embedding models have shown great power in the knowledge graph completion (KGC) task. By learning structural constraints for each training triple, these methods implicitly memorize intrinsic relation rules to infer missing links. However, this paper points out that multi-hop relation rules are hard to memorize reliably due to the inherent deficiencies of such an implicit memorization strategy, which causes embedding models to underperform in predicting links between distant entity pairs. To alleviate this problem, we present the Vertical Learning Paradigm (VLP), which extends embedding models by allowing them to explicitly copy target information from related factual triples for more accurate prediction. Rather than relying solely on implicit memory, VLP directly provides additional cues to improve the generalization ability of embedding models, and in particular makes distant link prediction significantly easier. Moreover, we also propose a novel relative-distance-based negative sampling technique (ReD) for more effective optimization. Experiments demonstrate the validity and generality of our proposals on two standard benchmarks. Our code is available at https://github.com/rui9812/VLP.
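
As a rough illustration of the copy idea (not the paper's actual formulation; the function names and the way the two scores are blended are assumptions), one can picture the prediction as an embedding score augmented with an explicit cue gathered from related facts:

```python
import torch

def vlp_style_score(h, r, t, related_tails, embed_score, copy_weight=0.5):
    """Hypothetical scoring sketch: blend an embedding-based score for the
    query (h, r, ?) with an explicit 'copy' cue measuring how close the
    candidate tail t is to tails observed in related factual triples."""
    base = embed_score(h, r, t)                        # implicit memorization
    if related_tails.numel() == 0:                     # no related facts found
        return base
    # Copy cue: similarity between the candidate and copied target entities.
    sims = torch.cosine_similarity(t.unsqueeze(0), related_tails, dim=-1)
    return (1 - copy_weight) * base + copy_weight * sims.max()
```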

* Accepted to ACL 2023 Main Conference (Long Paper) 

Towards Better Graph Representation Learning with Parameterized Decomposition & Filtering

May 10, 2023
Mingqi Yang, Wenjie Feng, Yanming Shen, Bryan Hooi

Proposing an effective and flexible matrix to represent a graph is a fundamental challenge that has been explored from multiple perspectives, e.g., filtering in Graph Fourier Transforms. In this work, we develop a novel and general framework that unifies many existing GNN models from the view of parameterized decomposition and filtering, and show how it enhances the flexibility of GNNs while alleviating the smoothness and amplification issues of existing models. Essentially, we show that the extensively studied spectral graph convolutions with learnable polynomial filters are constrained variants of this formulation, and that releasing these constraints enables our model to express the desired decomposition and filtering simultaneously. Based on this generalized framework, we develop models that are simple to implement yet achieve significant improvements and computational efficiency on a variety of graph learning tasks. Code is available at https://github.com/qslim/PDF.
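
For reference, the constrained variant mentioned above, a spectral convolution with a learnable polynomial filter over the graph Laplacian, can be sketched generically as follows (an illustration of the baseline formulation, not the PDF model itself):

```python
import torch
import torch.nn as nn

class PolyFilterConv(nn.Module):
    """Generic polynomial spectral filter: y = sum_k theta_k * L^k x,
    where a k-order polynomial only mixes information within k hops."""
    def __init__(self, order=3):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(order + 1))

    def forward(self, L, x):
        out = self.theta[0] * x
        z = x
        for k in range(1, self.theta.numel()):
            z = L @ z                      # propagate one more hop
            out = out + self.theta[k] * z
        return out
```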

* ICML 2023 

NodeTrans: A Graph Transfer Learning Approach for Traffic Prediction

Jul 04, 2022
Xueyan Yin, Feifan Li, Yanming Shen, Heng Qi, Baocai Yin

Recently, deep learning methods have made great progress in traffic prediction, but their performance depends on a large amount of historical data. In reality, we may face a data scarcity issue, in which case deep learning models fail to obtain satisfactory performance. Transfer learning is a promising approach to solving the data scarcity issue. However, existing transfer learning approaches in traffic prediction are mainly based on regular grid data, which is not suitable for the graph-structured data inherent in traffic networks. Moreover, existing graph-based models can only capture shared traffic patterns in the road network, and how to learn node-specific patterns remains a challenge. In this paper, we propose a novel transfer learning approach to traffic prediction with scarce data, which transfers the knowledge learned from a data-rich source domain to a data-scarce target domain. First, a spatial-temporal graph neural network is proposed, which can capture the node-specific spatial-temporal traffic patterns of different road networks. Then, to improve the robustness of transfer, we design a pattern-based transfer strategy, where we leverage a clustering-based mechanism to distill common spatial-temporal patterns in the source domain, and use this knowledge to further improve the prediction performance in the target domain. Experiments on real-world datasets verify the effectiveness of our approach.
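
A minimal sketch of what a clustering-based pattern-distillation step could look like, assuming node-level representations are already available from the source spatial-temporal GNN (the helper names are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

def distill_source_patterns(source_node_embeddings, n_patterns=8):
    """Cluster source-domain node representations into a small set of
    shared spatial-temporal 'patterns' (the cluster centroids)."""
    km = KMeans(n_clusters=n_patterns, n_init=10, random_state=0)
    km.fit(source_node_embeddings)
    return km.cluster_centers_

def assign_target_patterns(target_node_embeddings, patterns):
    """Match each target-domain node to its nearest source pattern."""
    dists = np.linalg.norm(
        target_node_embeddings[:, None, :] - patterns[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```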

Soft-mask: Adaptive Substructure Extractions for Graph Neural Networks

Jun 11, 2022
Mingqi Yang, Yanming Shen, Heng Qi, Baocai Yin

For learning graph representations, not all detailed structures within a graph are relevant to the given graph tasks. Task-relevant structures can be $localized$ or $sparse$, involving only subgraphs or characterized by the interactions of subgraphs (a hierarchical perspective). A graph neural network should be able to efficiently extract task-relevant structures and be invariant to irrelevant parts, which is challenging for general message-passing GNNs. In this work, we propose to learn graph representations from a sequence of subgraphs of the original graph to better capture task-relevant substructures or hierarchical structures and skip $noisy$ parts. To this end, we design a soft-mask GNN layer that extracts desired subgraphs through a mask mechanism. The soft mask is defined in a continuous space to maintain differentiability and to characterize the weights of different parts. Compared with existing subgraph or hierarchical representation learning methods and graph pooling operations, the soft-mask GNN layer is not limited by a fixed sample or drop ratio, and is therefore more flexible in extracting subgraphs of arbitrary sizes. Extensive experiments on public graph benchmarks show that the soft-mask mechanism brings performance improvements. It also provides interpretability: visualizing the values of the masks in each layer gives insight into the structures learned by the model.
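
A minimal sketch of a differentiable soft mask, assuming a simple sigmoid gate over node features (the exact gating used in the paper may differ):

```python
import torch
import torch.nn as nn

class SoftMaskLayer(nn.Module):
    """Hypothetical soft-mask gate: each node receives a continuous weight in
    (0, 1), so 'dropping' a node is differentiable rather than a hard cut."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, x, adj):
        mask = torch.sigmoid(self.gate(x))   # per-node weight in (0, 1)
        x = mask * x                          # softly keep / suppress nodes
        return adj @ x, mask                  # masked aggregation + mask for inspection
```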

* The Web Conference (WWW), 2021 

HousE: Knowledge Graph Embedding with Householder Parameterization

Feb 16, 2022
Rui Li, Jianan Zhao, Chaozhuo Li, Di He, Yiqi Wang, Yuming Liu, Hao Sun, Senzhang Wang, Weiwei Deng, Yanming Shen, Xing Xie, Qi Zhang

The effectiveness of knowledge graph embedding (KGE) largely depends on the ability to model intrinsic relation patterns and mapping properties. However, existing approaches can only capture some of them with insufficient modeling capacity. In this work, we propose a more powerful KGE framework named HousE, which involves a novel parameterization based on two kinds of Householder transformations: (1) Householder rotations to achieve superior capacity of modeling relation patterns; (2) Householder projections to handle sophisticated relation mapping properties. Theoretically, HousE is capable of modeling crucial relation patterns and mapping properties simultaneously. Besides, HousE is a generalization of existing rotation-based models while extending the rotations to high-dimensional spaces. Empirically, HousE achieves new state-of-the-art performance on five benchmark datasets. Our code is available at https://github.com/anrep/HousE.
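
For background, a Householder transformation is the reflection H = I - 2vv^T/(v^Tv), and composing an even number of such reflections yields a high-dimensional rotation. A small numerical check of this building block (generic linear algebra, not the HousE implementation):

```python
import numpy as np

def householder(v):
    """Reflection H = I - 2 vv^T / (v^T v); H is orthogonal with det -1."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

# Composing two reflections gives an orthogonal matrix with det +1,
# i.e. a rotation, which generalizes to arbitrary dimensions.
R = householder(np.array([1.0, 2.0, 3.0])) @ householder(np.array([0.5, -1.0, 2.0]))
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```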

Improving Spectral Graph Convolution for Learning Graph-level Representation

Dec 14, 2021
Mingqi Yang, Rui Li, Yanming Shen, Heng Qi, Baocai Yin

From the original, theoretically well-defined spectral graph convolution to the subsequent spatially based message-passing models, spatial locality (in the vertex domain) acts as a fundamental principle of most graph neural networks (GNNs). In spectral graph convolution, the filter is approximated by polynomials, where a $k$-order polynomial covers $k$-hop neighbors. In message passing, the various definitions of neighbors used in aggregation are in fact an extensive exploration of spatial locality information. For learning node representations, topological distance seems necessary since it characterizes the basic relations between nodes. However, for learning representations of entire graphs, does it still need to hold? In this work, we show that such a principle is not necessary; it actually hinders most existing GNNs from efficiently encoding graph structures. By removing it, as well as the limitation of polynomial filters, the resulting new architecture significantly boosts performance on learning graph representations. We also study the effects of the graph spectrum on signals and interpret various existing improvements as different spectrum smoothing techniques. This serves as a spatial understanding that quantitatively measures the effect of the spectrum on input signals, in comparison to the well-known spectral understanding as high/low-pass filters. More importantly, it sheds light on developing powerful graph representation models.
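
As a generic illustration of filtering directly in the spectral domain, free of the polynomial (k-hop locality) constraint discussed above (not the paper's architecture):

```python
import numpy as np

def spectral_filter(L, X, g):
    """Filter node signals X (shape [n, d]) with an arbitrary spectral
    response g(lambda) applied to the Laplacian eigenvalues."""
    lam, U = np.linalg.eigh(L)                   # spectrum of the Laplacian
    return U @ (g(lam)[:, None] * (U.T @ X))     # filter in the spectral domain

# Example of a smoothing response that damps high-frequency components:
# X_smooth = spectral_filter(L, X, lambda lam: np.exp(-2.0 * lam))
```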

First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track

Jun 20, 2021
Chengxuan Ying, Mingqi Yang, Shuxin Zheng, Guolin Ke, Shengjie Luo, Tianle Cai, Chenglin Wu, Yuxin Wang, Yanming Shen, Di He

In this technical report, we present our solution to the KDD Cup 2021 OGB Large-Scale Challenge - PCQM4M-LSC Track. We adopt Graphormer and ExpC as our basic models. We train each model with 8-fold cross-validation, and additionally train two Graphormer models on the union of the training and validation sets with different random seeds. For the final submission, we use a naive ensemble of these 18 models by taking the average of their outputs. Using our method, our team MachineLearning achieved 0.1200 MAE on the test set, which won first place in the KDD Cup graph prediction track.
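
The ensembling step is a plain average of the 18 models' outputs; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def naive_ensemble(predictions):
    """Average per-model predictions; `predictions` is a list of arrays,
    one array of per-sample outputs for each of the 18 trained models."""
    return np.mean(np.stack(predictions, axis=0), axis=0)

# submission = naive_ensemble(list_of_18_prediction_arrays)
```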

Do Transformers Really Perform Bad for Graph Representation?

Jun 17, 2021
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu

The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing the Transformer on graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding the structural information of graphs, many popular GNN variants can be covered as special cases of Graphormer.
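
One of the structural encodings, the spatial encoding, can be pictured as a learnable attention bias indexed by the shortest-path distance between node pairs; a simplified sketch (a conceptual illustration, not the released Graphormer code):

```python
import torch
import torch.nn as nn

class SpatialEncodingBias(nn.Module):
    """Simplified spatial encoding: a learnable scalar bias b_{phi(i,j)},
    indexed by the shortest-path distance between nodes i and j, added to
    the pre-softmax attention logits."""
    def __init__(self, max_dist=32):
        super().__init__()
        self.bias = nn.Embedding(max_dist + 1, 1)

    def forward(self, attn_logits, spd):
        # attn_logits: [n, n] attention scores; spd: [n, n] integer distances
        spd = spd.clamp(max=self.bias.num_embeddings - 1)
        return attn_logits + self.bias(spd).squeeze(-1)
```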

Breaking the Expressive Bottlenecks of Graph Neural Networks

Dec 14, 2020
Mingqi Yang, Yanming Shen, Heng Qi, Baocai Yin

Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressiveness of graph neural networks (GNNs), showing that neighborhood-aggregation GNNs are at most as powerful as the 1-WL test in distinguishing graph structures. Improvements have also been proposed in analogy to the $k$-WL test ($k>1$). However, the aggregators in these GNNs are far from injective as required by the WL test and suffer from weak distinguishing strength, making them expressive bottlenecks. In this paper, we improve the expressiveness by exploring powerful aggregators. We reformulate aggregation with the corresponding aggregation coefficient matrix, and then systematically analyze the requirements on this matrix for building more powerful and even injective aggregators. This analysis can also be viewed as a strategy for preserving the rank of hidden features, and implies that basic aggregators correspond to a special case of low-rank transformations. We also show the necessity of applying nonlinear units ahead of aggregation, which differs from most aggregation-based GNNs. Based on our theoretical analysis, we develop two GNN layers, ExpandingConv and CombConv. Experimental results show that our models significantly boost performance, especially on large and densely connected graphs.
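
A minimal sketch of the "nonlinearity ahead of aggregation" ordering argued for above (a generic illustration under that assumption, not the exact ExpandingConv/CombConv layers):

```python
import torch
import torch.nn as nn

class PreActAggregation(nn.Module):
    """Apply a nonlinear transform to node features *before* neighborhood
    aggregation, rather than aggregating raw features first."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, x, adj):
        return adj @ self.mlp(x)   # transform, then aggregate over neighbors
```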