Beatrice Bevilacqua

An OOD Multi-Task Perspective for Link Prediction with New Relation Types and Nodes

Jul 12, 2023
Jincheng Zhou, Beatrice Bevilacqua, Bruno Ribeiro

The task of inductive link prediction in (discrete) attributed multigraphs infers missing attributed links (relations) between nodes in new test multigraphs. Traditional relational learning methods face the challenge of limited generalization to OOD test multigraphs containing both novel nodes and novel relation types not seen in training. Recently, under the only assumption that all relation types share the same structural predictive patterns (single task), Gao et al. (2023) proposed an OOD link prediction method using the theoretical concept of double exchangeability (for nodes & relation types), in contrast to the (single) exchangeability (only for nodes) used to design Graph Neural Networks (GNNs). In this work we further extend the double exchangeability concept to multi-task double exchangeability, where we define link prediction in attributed multigraphs that can have distinct and potentially conflicting predictive patterns for different sets of relation types (multiple tasks). Our empirical results on real-world datasets demonstrate that our approach can effectively generalize to entirely new relation types in test, without access to additional information, yielding significant performance improvements over existing methods.

* 23 pages, 3 figures 

Graph Positional Encoding via Random Feature Propagation

Mar 08, 2023
Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron

Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two augmentation schemes. Here we propose a novel family of positional encoding schemes which draws a link between the above two approaches and improves over both. The new approach, named Random Feature Propagation (RFP), is inspired by the power iteration method and its generalizations. It concatenates several intermediate steps of an iterative algorithm for computing the dominant eigenvectors of a propagation matrix, starting from random node features. Notably, these propagation steps are based on graph-dependent propagation operators that can be either predefined or learned. We explore the theoretical and empirical benefits of RFP. First, we provide theoretical justifications for using random features, for incorporating early propagation steps, and for using multiple random initializations. Then, we empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks.
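The propagate-and-concatenate recipe described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the symmetrically normalized operator, the feature dimension `k`, and the number of steps are illustrative choices.

```python
import numpy as np

def rfp_encoding(A, k=8, steps=6, seed=0):
    """Random Feature Propagation sketch: concatenate intermediate
    power-iteration steps of a propagation operator applied to
    random initial node features."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Symmetrically normalized propagation operator D^{-1/2} A D^{-1/2}
    d = A.sum(1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    P = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    X = rng.standard_normal((n, k))  # random initial node features
    feats = []
    for _ in range(steps):
        X = P @ X
        # Normalize each column, as in power iteration, to keep iterates bounded
        X = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
        feats.append(X.copy())
    # Early steps retain randomness; later steps approach dominant eigenvectors
    return np.concatenate(feats, axis=1)  # shape (n, k * steps)
```

The concatenated matrix interpolates between the two augmentation families: the first steps are close to plain random features, while later steps converge toward spectral information about `P`.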

Neural Algorithmic Reasoning with Causal Regularisation

Feb 20, 2023
Beatrice Bevilacqua, Kyriacos Nikiforou, Borja Ibarz, Ioana Bica, Michela Paganini, Charles Blundell, Jovana Mitrovic, Petar Veličković

Recent work on neural algorithmic reasoning has investigated the reasoning capabilities of neural networks, effectively demonstrating they can learn to execute classical algorithms on unseen data coming from the train distribution. However, the performance of existing neural reasoners significantly degrades on out-of-distribution (OOD) test data, where inputs have larger sizes. In this work, we make an important observation: there are many \emph{different} inputs for which an algorithm will perform certain intermediate computations \emph{identically}. This insight allows us to develop data augmentation procedures that, given an algorithm's intermediate trajectory, produce inputs for which the target algorithm would have \emph{exactly} the same next trajectory step. Then, we employ a causal framework to design a corresponding self-supervised objective, and we prove that it improves the OOD generalisation capabilities of the reasoner. We evaluate our method on the CLRS algorithmic reasoning benchmark, where we show up to 3$\times$ improvements on the OOD test data.

* 16 pages, 7 figures 

Causal Lifting and Link Prediction

Feb 02, 2023
Leonardo Cotta, Beatrice Bevilacqua, Nesreen Ahmed, Bruno Ribeiro

Current state-of-the-art causal models for link prediction assume an underlying set of inherent node factors -- an innate characteristic defined at the node's birth -- that governs the causal evolution of links in the graph. In some causal tasks, however, link formation is path-dependent, i.e., the outcome of link interventions depends on existing links. For instance, in the customer-product graph of an online retailer, the effect of an 85-inch TV ad (treatment) likely depends on whether the customer already has an 85-inch TV. Unfortunately, existing causal methods are impractical in these scenarios. The cascading functional dependencies between links (due to path dependence) are either unidentifiable or require an impractical number of control variables. In order to remedy this shortcoming, this work develops the first causal model capable of dealing with path dependencies in link prediction. It introduces the concept of causal lifting, an invariance in causal models that, when satisfied, allows the identification of causal link prediction queries using limited interventional data. On the estimation side, we show how structural pairwise embeddings -- a type of symmetry-based joint representation of node pairs in a graph -- exhibit lower bias and correctly represent the causal structure of the task, as opposed to existing node embedding methods, e.g., GNNs and matrix factorization. Finally, we validate our theoretical findings on four datasets under three different scenarios for causal link prediction tasks: knowledge base completion, covariance matrix estimation and consumer-product recommendations.

A Generalist Neural Algorithmic Learner

Sep 22, 2022
Borja Ibarz, Vitaly Kurin, George Papamakarios, Kyriacos Nikiforou, Mehdi Bennani, Róbert Csordás, Andrew Dudzik, Matko Bošnjak, Alex Vitvitskyi, Yulia Rubanova, Andreea Deac, Beatrice Bevilacqua, Yaroslav Ganin, Charles Blundell, Petar Veličković

The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge in methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with identical control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner -- a single graph neural network processor capable of learning to execute a wide range of algorithms, such as sorting, searching, dynamic programming, path-finding and geometry. We leverage the CLRS benchmark to empirically show that, much like recent successes in the domain of perception, generalist algorithmic learners can be built by "incorporating" knowledge. That is, it is possible to effectively learn algorithms in a multi-task manner, so long as we can learn to execute them well in a single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime and processor architecture over CLRS, improving average single-task performance by over 20% from prior art. We then conduct a thorough ablation of multi-task learners leveraging these improvements. Our results demonstrate a generalist learner that effectively incorporates knowledge captured by specialist models.

* 20 pages, 10 figures 

Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries

Jun 22, 2022
Fabrizio Frasca, Beatrice Bevilacqua, Michael M. Bronstein, Haggai Maron

Subgraph GNNs are a recent class of expressive Graph Neural Networks (GNNs) which model graphs as collections of subgraphs. So far, the design space of possible Subgraph GNN architectures as well as their basic theoretical properties are still largely unexplored. In this paper, we study the most prominent form of subgraph methods, which employs node-based subgraph selection policies such as ego-networks or node marking and deletion. We address two central questions: (1) What is the upper-bound of the expressive power of these methods? and (2) What is the family of equivariant message passing layers on these sets of subgraphs? Our first step in answering these questions is a novel symmetry analysis which shows that modelling the symmetries of node-based subgraph collections requires a significantly smaller symmetry group than the one adopted in previous works. This analysis is then used to establish a link between Subgraph GNNs and Invariant Graph Networks (IGNs). We answer the questions above by first bounding the expressive power of subgraph methods by 3-WL, and then proposing a general family of message-passing layers for subgraph methods that generalises all previous node-based Subgraph GNNs. Finally, we design a novel Subgraph GNN dubbed SUN, which theoretically unifies previous architectures while providing better empirical performance on multiple benchmarks.

* 46 pages, 6 figures 

Equivariant Subgraph Aggregation Networks

Oct 06, 2021
Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, Haggai Maron

Message-passing neural networks (MPNNs) are the leading architecture for deep learning on graph-structured data, in large part due to their simplicity and scalability. Unfortunately, it was shown that these architectures are limited in their expressive power. This paper proposes a novel framework called Equivariant Subgraph Aggregation Networks (ESAN) to address this issue. Our main observation is that while two graphs may not be distinguishable by an MPNN, they often contain distinguishable subgraphs. Thus, we propose to represent each graph as a set of subgraphs derived by some predefined policy, and to process it using a suitable equivariant architecture. We develop novel variants of the 1-dimensional Weisfeiler-Leman (1-WL) test for graph isomorphism, and prove lower bounds on the expressiveness of ESAN in terms of these new WL variants. We further prove that our approach increases the expressive power of both MPNNs and more expressive architectures. Moreover, we provide theoretical results that describe how design choices such as the subgraph selection policy and equivariant neural architecture affect our architecture's expressive power. To deal with the increased computational cost, we propose a subgraph sampling scheme, which can be viewed as a stochastic version of our framework. A comprehensive set of experiments on real and synthetic datasets demonstrates that our framework improves the expressive power and overall performance of popular GNN architectures.
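The core observation above -- two graphs an MPNN cannot distinguish often contain distinguishable subgraphs -- can be illustrated with a minimal node-deletion selection policy. The component-count readout below is a toy, hand-crafted stand-in for ESAN's learned equivariant architecture, not the paper's model:

```python
import numpy as np

def node_deletion_bag(A):
    """Node-deletion subgraph selection policy (sketch): map a graph
    with adjacency A to the bag of subgraphs obtained by deleting
    each node in turn (its row and column are zeroed out)."""
    bag = []
    for v in range(A.shape[0]):
        S = A.copy()
        S[v, :] = 0
        S[:, v] = 0
        bag.append(S)
    return bag

def num_components(S):
    """Connected components via reachability; a zeroed-out node
    counts as its own isolated component."""
    n = S.shape[0]
    R = np.linalg.matrix_power(np.eye(n) + S, n) > 0
    # Nodes with identical reachability sets share a component
    return len({tuple(row) for row in R})

def bag_invariant(A):
    """Toy permutation-invariant readout over the bag: the sorted
    multiset of per-subgraph component counts."""
    return tuple(sorted(num_components(S) for S in node_deletion_bag(A)))
```

For example, the 6-cycle and two disjoint triangles are both 2-regular and therefore indistinguishable by 1-WL (and by standard MPNNs), yet their bags differ: deleting any node from the 6-cycle leaves a single path (plus the isolated node), while deleting any node from the triangles graph leaves an edge and an intact triangle (plus the isolated node).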

* 42 pages 

Size-Invariant Graph Representations for Graph Classification Extrapolations

Mar 08, 2021
Beatrice Bevilacqua, Yangze Zhou, Bruno Ribeiro

In general, graph representation learning methods assume that the test and train data come from the same distribution. In this work we consider an underexplored area of an otherwise rapidly developing field of graph representation learning: The task of out-of-distribution (OOD) graph classification, where train and test data have different distributions, with test data unavailable during training. Our work shows it is possible to use a causal model to learn approximately invariant representations that better extrapolate between train and test data. Finally, we conclude with synthetic and real-world dataset experiments showcasing the benefits of representations that are invariant to train/test distribution shifts.
