
Mihai Cucuringu


Robust Angular Synchronization via Directed Graph Neural Networks

Oct 09, 2023
Yixuan He, Gesine Reinert, David Wipf, Mihai Cucuringu

The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles $\theta_1, \dots, \theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets $\theta_i-\theta_j \;\mbox{mod} \; 2\pi.$ Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed $k$-synchronization) is to estimate $k$ groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
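To make the problem concrete, the classical spectral relaxation (a standard baseline in this literature, not GNNSync itself) can be sketched in a few lines of NumPy: form the Hermitian matrix with entries $e^{i(\theta_i-\theta_j)}$ from the measured offsets and read the angle estimates off its top eigenvector. The helper below assumes a complete, noiseless offset matrix purely for illustration.

```python
import numpy as np

def spectral_sync(offsets):
    """Estimate angles (up to a global additive shift) from a complete
    matrix of pairwise offsets: offsets[i, j] ~ theta_i - theta_j mod 2*pi."""
    H = np.exp(1j * offsets)               # Hermitian measurement matrix
    w, V = np.linalg.eigh(H)               # eigenvalues in ascending order
    v = V[:, -1]                           # top eigenvector
    return np.mod(np.angle(v), 2 * np.pi)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=8)
offsets = theta[:, None] - theta[None, :]  # noiseless pairwise offsets
est = spectral_sync(offsets)
# remove the global phase before comparing to the ground truth
shift = np.angle(np.mean(np.exp(1j * (theta - est))))
err = np.abs(np.angle(np.exp(1j * (theta - est - shift))))
```

In the noiseless case the measurement matrix is rank one and the recovery is exact; the high-noise regimes the abstract targets are precisely where this relaxation degrades.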


Graph Neural Networks for Forecasting Multivariate Realized Volatility with Spillover Effects

Aug 01, 2023
Chao Zhang, Xingyue Pu, Mihai Cucuringu, Xiaowen Dong

We present a novel methodology for modeling and forecasting multivariate realized volatilities using customized graph neural networks to incorporate spillover effects across stocks. The proposed model offers the benefits of incorporating spillover effects from multi-hop neighbors, capturing nonlinear relationships, and flexible training with different loss functions. Our empirical findings provide compelling evidence that incorporating spillover effects from multi-hop neighbors alone does not yield a clear advantage in terms of predictive accuracy. However, modeling nonlinear spillover effects enhances the forecasting accuracy of realized volatilities, particularly for short-term horizons of up to one week. Moreover, our results consistently indicate that training with the Quasi-likelihood loss leads to substantial improvements in model performance compared to the commonly-used mean squared error. A comprehensive series of empirical evaluations in alternative settings confirm the robustness of our results.
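The Quasi-likelihood (QLIKE) loss mentioned above has a standard form in the volatility-forecasting literature; a minimal NumPy version, for realized variance y and forecast f (assuming this standard variant is the one intended), is:

```python
import numpy as np

def qlike(y, f):
    """QLIKE loss: mean of y/f - log(y/f) - 1 over observations.
    Zero iff f == y elementwise; penalizes under-prediction heavily."""
    r = np.asarray(y) / np.asarray(f)
    return float(np.mean(r - np.log(r) - 1.0))

def mse(y, f):
    """Mean squared error, the common alternative criterion."""
    return float(np.mean((np.asarray(y) - np.asarray(f)) ** 2))

y = np.array([0.8, 1.2, 2.0])  # toy realized variances
```

Unlike MSE, QLIKE is asymmetric: halving the forecast incurs a larger penalty than doubling it, which matches the economic cost of under-forecasting risk.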

* 8 figures, 5 tables 

SaGess: Sampling Graph Denoising Diffusion Model for Scalable Graph Generation

Jun 29, 2023
Stratis Limnios, Praveen Selvaraj, Mihai Cucuringu, Carsten Maple, Gesine Reinert, Andrew Elliott


Over recent years, denoising diffusion generative models have come to be regarded as state-of-the-art methods for synthetic data generation, especially for images. These approaches have also proved successful in other applications, such as tabular and graph data generation. However, due to computational complexity, the application of these techniques to graph data has to date been restricted to small graphs, such as those used in molecular modeling. In this paper, we propose SaGess, a discrete denoising diffusion approach that is able to generate large real-world networks by augmenting a diffusion model (DiGress) with a generalized divide-and-conquer framework. The algorithm generates larger graphs by sampling a covering of subgraphs of the initial graph in order to train DiGress. SaGess then constructs a synthetic graph using the subgraphs that have been generated by DiGress. We evaluate the quality of the synthetic data sets against several competitor methods by comparing graph statistics between the original and synthetic samples, as well as by evaluating the utility of the synthetic data set when used to train a task-driven model, namely link prediction. In our experiments, SaGess outperforms most of the one-shot state-of-the-art graph generating methods by a significant factor, both on the graph metrics and on the link prediction task.
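The covering step can be sketched independently of the diffusion model. The sampler below is a hypothetical simplification (not the paper's exact procedure): it repeatedly grows a small node set around an uncovered edge and takes the induced subgraph, until every edge of the input graph is covered.

```python
import random

def covering_subgraphs(edges, n_nodes, size=4, seed=0):
    """Sample induced subgraphs (as sets of frozenset edges) until their
    union covers every edge of the input undirected graph."""
    rng = random.Random(seed)
    adj = {u: set() for u in range(n_nodes)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    uncovered = set(map(frozenset, edges))
    samples = []
    while uncovered:
        u, v = tuple(rng.choice(sorted(uncovered, key=sorted)))
        nodes = {u, v}                         # seed with an uncovered edge
        frontier = list(adj[u] | adj[v])       # grow a small neighborhood
        rng.shuffle(frontier)
        nodes.update(frontier[: max(0, size - 2)])
        sub = {frozenset((a, b)) for a in nodes for b in adj[a] if b in nodes}
        samples.append(sub)
        uncovered -= sub                       # each pass covers >= 1 edge
    return samples

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
samples = covering_subgraphs(edges, n_nodes=4)
```

Each sampled subgraph would then serve as a training example for the diffusion model, with the synthetic graph reassembled from generated subgraphs.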


Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models

May 11, 2023
Yichi Zhang, Mihai Cucuringu, Alexander Y. Shestopaloff, Stefan Zohren


In multivariate time series systems, key insights can be obtained by discovering lead-lag relationships inherent in the data, which refer to the dependence between two time series shifted in time relative to one another, and which can be leveraged for the purposes of control, forecasting or clustering. We develop a clustering-driven methodology for the robust detection of lead-lag relationships in lagged multi-factor models. Within our framework, the envisioned pipeline takes as input a set of time series, and creates an enlarged universe of extracted subsequence time series from each input time series, using a sliding window approach. We then apply various clustering techniques (e.g., K-means++ and spectral clustering), employing a variety of pairwise similarity measures, including nonlinear ones. Once the clusters have been extracted, lead-lag estimates across clusters are aggregated to enhance the identification of consistent relationships in the original universe. Since multivariate time series are ubiquitous in a wide range of domains, we demonstrate that our method is not only able to robustly detect lead-lag relationships in financial markets, but can also yield insightful results when applied to an environmental data set.
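The basic pairwise ingredient, estimating a lag by maximizing cross-correlation over shifts, can be illustrated without the clustering machinery (which is the paper's contribution); the estimator below is a hypothetical stand-in operating on a single pair of series:

```python
import numpy as np

def lead_lag(x, y, max_lag):
    """Return the shift l in [-max_lag, max_lag] maximizing the
    correlation between x[t] and y[t + l] (l > 0 means x leads y)."""
    best, best_corr = 0, -np.inf
    for l in range(-max_lag, max_lag + 1):
        if l >= 0:
            a, b = x[: len(x) - l], y[l:]
        else:
            a, b = x[-l:], y[: len(y) + l]
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best, best_corr = l, corr
    return best

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.roll(x, 3) + 0.1 * rng.standard_normal(500)  # y lags x by 3 steps
```

In the paper's pipeline such estimates are computed on extracted subsequences and then aggregated across clusters to stabilize them against noise.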


OFTER: An Online Pipeline for Time Series Forecasting

Apr 08, 2023
Nikolas Michael, Mihai Cucuringu, Sam Howison


We introduce OFTER, a time series forecasting pipeline tailored for mid-sized multivariate time series. OFTER utilizes the non-parametric models of k-nearest neighbors and Generalized Regression Neural Networks, integrated with a dimensionality reduction component. To circumvent the curse of dimensionality, we employ a weighted norm based on a modified version of the maximal correlation coefficient. The pipeline we introduce is specifically designed for online tasks, has an interpretable output, and is able to outperform several state-of-the-art baselines. The computational efficiency of the algorithm, its online nature, and its ability to operate in low signal-to-noise regimes render OFTER an ideal approach for financial multivariate time series problems, such as daily equity forecasting. Our work demonstrates that while deep learning models hold significant promise for time series forecasting, traditional methods that carefully integrate mainstream tools remain very competitive alternatives, with the added benefits of scalability and interpretability.
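The weighted-norm k-nearest-neighbor component can be sketched as follows; note that the feature weights here are plain placeholders, whereas OFTER derives its weights from a modified maximal correlation coefficient:

```python
import numpy as np

def knn_forecast(X_train, y_train, x_new, k=3, weights=None):
    """Predict the target for x_new as the mean target of its k nearest
    training points under a feature-weighted Euclidean norm."""
    w = np.ones(X_train.shape[1]) if weights is None else np.asarray(weights)
    d = np.sqrt(((X_train - x_new) ** 2 * w).sum(axis=1))
    nearest = np.argsort(d)[:k]   # indices of the k closest points
    return y_train[nearest].mean()

# toy training set: two features, three observations
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 10.0]])
y_train = np.array([1.0, 2.0, 3.0])
```

Down-weighting an uninformative feature (e.g., `weights=[1.0, 0.0]`) changes which neighbors are selected, which is the mechanism by which the weighted norm mitigates the curse of dimensionality.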

* 26 pages, 12 figures 

Symphony in the Latent Space: Provably Integrating High-dimensional Techniques with Non-linear Machine Learning Models

Dec 01, 2022
Qiong Wu, Jian Li, Zhenming Liu, Yanhua Li, Mihai Cucuringu


This paper revisits building machine learning algorithms that involve interactions between entities, such as those between financial assets in an actively managed portfolio, or interactions between users in a social network. Our goal is to forecast the future evolution of ensembles of multivariate time series in such applications (e.g., the future return of a financial asset or the future popularity of a Twitter account). Designing ML algorithms for such systems requires addressing the challenges of high-dimensional interactions and non-linearity. Existing approaches usually take an ad-hoc approach to integrating high-dimensional techniques into non-linear models, and recent studies have shown these approaches to have questionable efficacy in time-evolving interacting systems. To this end, we propose a novel framework, which we dub the additive influence model. Under our modeling assumption, we show that it is possible to decouple the learning of high-dimensional interactions from the learning of non-linear feature interactions. To learn the high-dimensional interactions, we leverage kernel-based techniques, with provable guarantees, to embed the entities in a low-dimensional latent space. To learn the non-linear feature-response interactions, we generalize prominent machine learning techniques, including designing a new statistically sound non-parametric method and an ensemble learning algorithm optimized for vector regressions. Extensive experiments on two common applications demonstrate that our new algorithms deliver significantly stronger forecasting power compared to standard and recently proposed methods.
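The shape of the decoupling, first embed the entity interactions in a low-dimensional latent space, then fit a non-linear model on top, can be illustrated with a truncated SVD standing in for the paper's kernel-based embedding (the kernel machinery and its guarantees are the paper's contribution; this shows only the pipeline structure):

```python
import numpy as np

def embed_entities(interactions, dim):
    """Low-dimensional entity embedding from an interaction matrix via
    truncated SVD (a stand-in for the paper's kernel-based embedding)."""
    U, s, _ = np.linalg.svd(interactions, full_matrices=False)
    return U[:, :dim] * s[:dim]   # scale singular vectors by strengths

rng = np.random.default_rng(2)
# synthetic rank-2 interaction structure among 20 entities
Z = rng.standard_normal((20, 2))
interactions = Z @ Z.T
emb = embed_entities(interactions, dim=2)
```

A non-linear regressor would then be trained on the embedded coordinates rather than on the full high-dimensional interaction matrix.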

* Association for the Advancement of Artificial Intelligence 2023  
* 24 pages 

MSGNN: A Spectral Graph Neural Network Based on a Novel Magnetic Signed Laplacian

Sep 18, 2022
Yixuan He, Michael Perlmutter, Gesine Reinert, Mihai Cucuringu


Signed and directed networks are ubiquitous in real-world applications. However, there has been relatively little work proposing spectral graph neural networks (GNNs) for such networks. Here we introduce a signed directed Laplacian matrix, which we call the magnetic signed Laplacian, as a natural generalization of both the signed Laplacian on signed graphs and the magnetic Laplacian on directed graphs. We then use this matrix to construct a novel efficient spectral GNN architecture and conduct extensive experiments on both node clustering and link prediction tasks. In these experiments, we consider tasks related to signed information, tasks related to directional information, and tasks related to both signed and directional information. We demonstrate that our proposed spectral GNN is effective for incorporating both signed and directional information, and attains leading performance on a wide range of data sets. Additionally, we provide a novel synthetic network model, which we refer to as the signed directed stochastic block model, and a number of novel real-world data sets based on lead-lag relationships in financial time series.
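Following the description above, a plausible unnormalized construction (the paper's normalized variant and exact parameterization may differ) combines a symmetrized signed part with a complex phase that encodes edge direction:

```python
import numpy as np

def magnetic_signed_laplacian(A, q=0.25):
    """Sketch of an unnormalized magnetic signed Laplacian for a signed,
    directed adjacency matrix A. Signs live in the symmetrized part;
    direction is encoded as a complex phase with parameter q."""
    A_s = (A + A.T) / 2                    # symmetrized signed part
    theta = 2 * np.pi * q * (A - A.T)      # antisymmetric phase matrix
    H = A_s * np.exp(1j * theta)           # Hermitian by construction
    D = np.diag(np.abs(A_s).sum(axis=1))   # degrees from magnitudes
    return D - H

# small signed, directed example: 0 -> 1 positive, 2 -> 0 negative
A = np.zeros((3, 3))
A[0, 1] = 1.0
A[2, 0] = -1.0
L = magnetic_signed_laplacian(A)
```

With q = 0 and a symmetric signed A this reduces to the signed Laplacian, and with an unsigned directed A it reduces to the magnetic Laplacian, matching the generalization claimed in the abstract.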


Graph similarity learning for change-point detection in dynamic networks

Mar 29, 2022
Deborah Sulem, Henry Kenlay, Mihai Cucuringu, Xiaowen Dong


Dynamic networks are ubiquitous for modelling sequential graph-structured data, e.g., brain connectomes, population flows and message exchanges. In this work, we consider dynamic networks that are temporal sequences of graph snapshots, and aim at detecting abrupt changes in their structure. This task is often termed network change-point detection and has numerous applications, such as fraud detection or physical motion monitoring. Leveraging a graph neural network model, we design a method to perform online network change-point detection that can adapt to the specific network domain and localise changes with no delay. The main novelty of our method is the use of a siamese graph neural network architecture to learn a data-driven graph similarity function, which allows the current graph to be compared effectively with its recent history. Importantly, our method does not require prior knowledge of the network generative distribution and is agnostic to the type of change-points; moreover, it can be applied to a large variety of networks that include, for instance, edge weights and node attributes. We show on synthetic and real data that our method enjoys a number of benefits: it is able to learn an adequate graph similarity function for performing online network change-point detection in diverse types of change-point settings, and requires a shorter data history to detect changes than most existing state-of-the-art baselines.
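The online detection logic can be caricatured without the learned component: substitute a fixed cosine similarity on adjacency matrices for the siamese-GNN similarity, and flag a snapshot whose similarity to the mean of its recent history drops below a threshold. This is a toy sketch of the detection loop, not the paper's method.

```python
import numpy as np

def online_changepoints(graphs, window=3, threshold=0.5):
    """Flag time steps whose snapshot is dissimilar from the mean of the
    previous `window` snapshots (fixed cosine similarity stands in for
    the learned siamese-GNN similarity function)."""
    flags = []
    for t in range(window, len(graphs)):
        ref = np.mean(graphs[t - window: t], axis=0)
        a, b = graphs[t].ravel(), ref.ravel()
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if sim < threshold:
            flags.append(t)
    return flags

# community structure switches abruptly at t = 5
A = np.zeros((4, 4)); A[0, 1] = A[1, 0] = 1.0
B = np.zeros((4, 4)); B[2, 3] = B[3, 2] = 1.0
flags = online_changepoints([A] * 5 + [B] * 3)
```

The flagged steps cluster right at the structural change; learning the similarity function, as the paper does, is what allows this scheme to adapt to the network domain.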

* 33 pages, 21 figures, 5 tables 

DAMNETS: A Deep Autoregressive Model for Generating Markovian Network Time Series

Mar 28, 2022
Jase Clarkson, Mihai Cucuringu, Andrew Elliott, Gesine Reinert


In this work, we introduce DAMNETS, a deep generative model for Markovian network time series. Time series of networks are found in many fields such as trade or payment networks in economics, contact networks in epidemiology or social media posts over time. Generative models of such data are useful for Monte-Carlo estimation and data set expansion, which is of interest for both data privacy and model fitting. Using recent ideas from the Graph Neural Network (GNN) literature, we introduce a novel GNN encoder-decoder structure in which an encoder GNN learns a latent representation of the input graph, and a decoder GNN uses this representation to simulate the network dynamics. We show using synthetic data sets that DAMNETS can replicate features of network topology across time observed in the real world, such as changing community structure and preferential attachment. DAMNETS outperforms competing methods on all of our measures of sample quality over several real and synthetic data sets.
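The Markovian setting, where each snapshot depends only on its predecessor, can be made concrete with a toy sampler that has nothing to do with DAMNETS' GNN encoder-decoder: each snapshot is obtained from the previous one by independent edge flips.

```python
import numpy as np

def markov_graph_series(A0, steps, p_flip=0.05, seed=0):
    """Toy Markovian network time series: each snapshot flips every
    potential edge of the previous snapshot independently with
    probability p_flip (undirected, binary, no self-loops)."""
    rng = np.random.default_rng(seed)
    n = A0.shape[0]
    series = [A0.copy()]
    iu = np.triu_indices(n, k=1)            # upper-triangular edge slots
    for _ in range(steps):
        A = series[-1].copy()
        flips = rng.random(len(iu[0])) < p_flip
        A[iu] = np.where(flips, 1 - A[iu], A[iu])
        A[(iu[1], iu[0])] = A[iu]           # keep the matrix symmetric
        series.append(A)
    return series

series = markov_graph_series(np.zeros((5, 5)), steps=10, p_flip=0.3)
```

DAMNETS replaces these independent flip probabilities with a learned, graph-conditional transition distribution, which is what lets it reproduce phenomena like preferential attachment and drifting community structure.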

* 12 pages, 10 figures, 2 tables 

PyTorch Geometric Signed Directed: A Survey and Software on Graph Neural Networks for Signed and Directed Graphs

Feb 22, 2022
Yixuan He, Xitong Zhang, Junjie Huang, Mihai Cucuringu, Gesine Reinert


Signed networks are ubiquitous in many real-world applications (e.g., social networks encoding trust/distrust relationships, correlation networks arising from time series data). While many signed networks are directed, there is a lack of survey papers and software packages on graph neural networks (GNNs) specially designed for directed networks. In this paper, we present PyTorch Geometric Signed Directed, a survey and software on GNNs for signed and directed networks. We review typical tasks, loss functions and evaluation metrics in the analysis of signed and directed networks, discuss data used in related experiments, and provide an overview of methods proposed. The deep learning framework consists of easy-to-use GNN models, synthetic and real-world data, as well as task-specific evaluation metrics and loss functions for signed and directed networks. The software is presented in a modular fashion, so that signed and directed networks can also be treated separately. As an extension library for PyTorch Geometric, our proposed software is maintained with open-source releases, detailed documentation, continuous integration, unit tests and code coverage checks. Our code is publicly available at \url{https://github.com/SherylHYX/pytorch_geometric_signed_directed}.

* 11 pages, 2 figures 