Antonio G. Marques

Recovering Missing Node Features with Local Structure-based Embeddings

Sep 16, 2023
Victor M. Tenorio, Madeline Navarro, Santiago Segarra, Antonio G. Marques

Node features bolster graph-based learning when exploited jointly with network structure. However, a lack of nodal attributes is prevalent in graph data. We present a framework to recover completely missing node features for a set of graphs, where the signals are known for only a subset of the graphs. Our approach incorporates prior information from both graph topology and existing nodal values. We demonstrate an example implementation of our framework where we assume that node features depend on local graph structure. Missing nodal values are estimated by aggregating known features from the most similar nodes. Similarity is measured through a node embedding space that preserves local topological features, which we train using a Graph AutoEncoder. We empirically show not only the accuracy of our feature estimation approach but also its value for downstream graph classification. Our results underscore the need to emphasize the relationship between node features and graph structure in graph-based learning.

* Submitted to 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024) 
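
The imputation step described in the abstract (aggregate known features from the most similar nodes in a structure-preserving embedding space) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes node embeddings have already been obtained (e.g., from a graph autoencoder), only numpy is required, and all names and parameters below are illustrative.

```python
import numpy as np

def impute_features(emb, feats, known_mask, k=5):
    """Estimate missing node features by averaging the features of the k
    nearest nodes (in a structure-based embedding space) whose features
    are known.

    emb        : (N, d) node embeddings, e.g., produced by a graph autoencoder
    feats      : (N, F) feature matrix; rows where known_mask is False are ignored
    known_mask : (N,) boolean, True where features are observed
    """
    known_idx = np.where(known_mask)[0]
    est = feats.copy()
    for i in np.where(~known_mask)[0]:
        # Euclidean distances from node i to all nodes with known features
        dists = np.linalg.norm(emb[known_idx] - emb[i], axis=1)
        nearest = known_idx[np.argsort(dists)[:k]]
        est[i] = feats[nearest].mean(axis=0)
    return est

# Toy usage: 10 nodes, 3-dim embeddings, 4-dim features, half of them missing
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 3))
feats = rng.normal(size=(10, 4))
known = np.array([True] * 5 + [False] * 5)
feats_hat = impute_features(emb, feats, known, k=3)
```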

Blind Deconvolution of Sparse Graph Signals in the Presence of Perturbations

Sep 16, 2023
Victor M. Tenorio, Samuel Rey, Antonio G. Marques

Blind deconvolution over graphs involves using (observed) output graph signals to obtain both the inputs (sources) and the filter that drives (models) the graph diffusion process. This is an ill-posed problem that requires additional assumptions, such as the sources being sparse, to be solvable. This paper addresses the blind deconvolution problem in the presence of imperfect graph information, where the observed graph is a perturbed version of the (unknown) true graph. While not having perfect knowledge of the graph is arguably more the norm than the exception, the body of literature on this topic is relatively small. This is partly because translating uncertainty about the graph topology into standard graph signal processing tools (e.g., eigenvectors or polynomials of the graph) is a challenging endeavor. To address this limitation, we propose an optimization-based estimator that solves the blind identification problem in the vertex domain, aims at estimating the inverse of the generating filter, and accounts explicitly for additive graph perturbations. Preliminary numerical experiments showcase the effectiveness and potential of the proposed algorithm.

* Submitted to the 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024) 
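
A hedged sketch of the convex core of this kind of vertex-domain approach: with the (possibly perturbed) graph shift fixed, one can estimate the coefficients of an inverse polynomial filter that renders the recovered inputs sparse. The perturbation-aware terms described in the abstract are omitted; cvxpy is assumed available, and the function below is an illustration rather than the paper's estimator.

```python
import numpy as np
import cvxpy as cp

def blind_deconv_inverse_filter(Y, S, K=3):
    """With the graph shift S fixed, estimate an *inverse* polynomial filter
    G = sum_k g_k S^k such that X = G Y is sparse (the recovered sources).
    Convex in g; the perturbation-aware terms of the full method are omitted."""
    powers = [np.linalg.matrix_power(S, k) for k in range(K)]
    g = cp.Variable(K)
    G = sum(g[k] * powers[k] for k in range(K))
    X = G @ Y
    prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(X))),
                      [g[0] == 1])        # normalization to rule out g = 0
    prob.solve()
    return g.value, X.value

# Toy usage: sparse sources diffused through a simple filter on a random graph
rng = np.random.default_rng(1)
N = 20
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
X_true = rng.normal(size=(N, 5)) * (rng.random((N, 5)) < 0.1)
Y = (np.eye(N) + 0.3 * A) @ X_true
g_hat, X_hat = blind_deconv_inverse_filter(Y, A, K=3)
```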

Joint Network Topology Inference in the Presence of Hidden Nodes

Jun 30, 2023
Madeline Navarro, Samuel Rey, Andrei Buciulea, Antonio G. Marques, Santiago Segarra

We investigate the increasingly prominent task of jointly inferring multiple networks from nodal observations. While most joint inference methods assume that observations are available at all nodes, we consider the realistic and more difficult scenario where a subset of nodes is hidden and cannot be measured. Under the assumptions that the partially observed nodal signals are graph stationary and the networks have similar connectivity patterns, we derive structural characteristics of the connectivity between hidden and observed nodes. This allows us to formulate an optimization problem for estimating networks while accounting for the influence of hidden nodes. We identify conditions under which a convex relaxation yields the sparsest solution, and we formalize the performance of our proposed optimization problem with respect to the effect of the hidden nodes. Finally, synthetic and real-world simulations evaluate our method in comparison with other baselines.
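
For intuition, a simplified, fully observed analogue of joint network inference can be written as a convex program that trades off per-graph Gaussian fit, sparsity, and similarity across graphs. The hidden-node structure derived in the paper is not modeled here; this sketch only assumes cvxpy and sample covariances, and all names are illustrative.

```python
import numpy as np
import cvxpy as cp

def joint_glasso(covs, lam=0.05, gamma=0.1):
    """Jointly estimate several precision matrices from their sample
    covariances, encouraging sparsity within each graph and similarity
    across graphs. Fully observed simplification; hidden nodes ignored."""
    K, N = len(covs), covs[0].shape[0]
    thetas = [cp.Variable((N, N), symmetric=True) for _ in range(K)]
    obj = 0
    for k in range(K):
        # Gaussian log-likelihood fit plus elementwise sparsity
        obj += -cp.log_det(thetas[k]) + cp.trace(covs[k] @ thetas[k])
        obj += lam * cp.sum(cp.abs(thetas[k]))
    for k in range(K):
        for l in range(k + 1, K):
            # encourage similar connectivity patterns across graphs
            obj += gamma * cp.sum(cp.abs(thetas[k] - thetas[l]))
    cp.Problem(cp.Minimize(obj)).solve()
    return [T.value for T in thetas]

# Toy usage: two closely related sets of observations over 10 nodes
rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 10))
X2 = X1 + 0.1 * rng.normal(size=(200, 10))
Th1, Th2 = joint_glasso([np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)])
```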

Graph Signal Processing: History, Development, Impact, and Outlook

Mar 21, 2023
Geert Leus, Antonio G. Marques, José M. F. Moura, Antonio Ortega, David I Shuman

Graph signal processing (GSP) generalizes signal processing (SP) tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph. Graphs are versatile, able to model irregular interactions, easy to interpret, and endowed with a corpus of mathematical results, rendering them natural candidates to serve as the basis for a theory of processing signals in more irregular domains. In this article, we provide an overview of the evolution of GSP, from its origins to the challenges ahead. The first half is devoted to reviewing the history of GSP and explaining how it gave rise to an encompassing framework that shares multiple similarities with SP. A key message is that GSP has been critical to developing novel and technically sound tools, theory, and algorithms that, by leveraging analogies with and insights from digital SP, provide new ways to analyze, process, and learn from graph signals. In the second half, we shift focus to review the impact of GSP on other disciplines. First, we look at the use of GSP in data science problems, including graph learning and graph-based deep learning. Second, we discuss the impact of GSP on applications, including neuroscience and image and video processing. We conclude with a brief discussion of the emerging and future directions of GSP.

Graph Learning from Gaussian and Stationary Graph Signals

Mar 13, 2023
Andrei Buciulea, Antonio G. Marques

Graphs have become pervasive tools to represent information and datasets with irregular support. However, in many cases, the underlying graph is either unavailable or naively obtained, calling for more advanced methods for its estimation. Indeed, graph topology inference methods that estimate the network structure from a set of signal observations have a long and well-established history. By assuming that the observations are both Gaussian and stationary in the sought graph, this paper proposes a new scheme to learn the network from nodal observations. Consideration of graph stationarity overcomes some of the limitations of the classical Graphical Lasso algorithm, which is constrained to a more specific class of graphical models. On the other hand, Gaussianity allows us to regularize the estimation, requiring fewer samples than in existing graph stationarity-based approaches. While the resultant estimation (optimization) problem is more complex and non-convex, we design an alternating convex approach able to find a stationary solution. Numerical tests with synthetic and real data are presented, and the performance of our approach is compared with existing alternatives.
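
A minimal sketch of the stationarity idea, assuming cvxpy: for stationary signals, the covariance (approximately) commutes with the graph shift, so a sparse graph can be sought under a commutation penalty. The Gaussian likelihood term and the alternating convex scheme of the full method are omitted, and all names below are illustrative.

```python
import numpy as np
import cvxpy as cp

def learn_graph_stationary(C, mu=10.0):
    """Learn a sparse adjacency matrix S from the sample covariance C of
    graph-stationary signals: stationarity implies C and S (roughly) commute,
    which is imposed here as a penalty."""
    N = C.shape[0]
    S = cp.Variable((N, N), symmetric=True)
    cons = [cp.diag(S) == 0,           # no self-loops
            S >= 0,                    # nonnegative weights
            S @ np.ones(N) >= 1]       # rule out the all-zero graph
    obj = cp.sum(cp.abs(S)) + mu * cp.sum_squares(C @ S - S @ C)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return S.value

# Toy usage: stationary signals obtained by filtering white noise on a graph
rng = np.random.default_rng(5)
N = 15
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
H = np.eye(N) + 0.5 * A + 0.1 * A @ A      # polynomial (hence stationarity-inducing) filter
X = H @ rng.normal(size=(N, 2000))
S_hat = learn_graph_stationary(np.cov(X), mu=10.0)
```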

Joint graph learning from Gaussian observations in the presence of hidden nodes

Dec 04, 2022
Samuel Rey, Madeline Navarro, Andrei Buciulea, Santiago Segarra, Antonio G. Marques

Graph learning problems are typically approached by focusing on learning the topology of a single graph when signals from all nodes are available. However, many contemporary setups involve multiple related networks and, moreover, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by this, we propose a joint graph learning method that takes into account the presence of hidden (latent) variables. Intuitively, the presence of the hidden nodes renders the inference task ill-posed and challenging to solve, so we overcome this detrimental influence by harnessing the similarity of the estimated graphs. To that end, we assume that the observed signals are drawn from a Gaussian Markov random field with latent variables and we carefully model the graph similarity among hidden (latent) nodes. Then, we exploit the structure resulting from the previous considerations to propose a convex optimization problem that solves the joint graph learning task by providing a regularized maximum likelihood estimator. Finally, we compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.

* This paper has been accepted in 2022 Asilomar Conference on Signals, Systems, and Computers 
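
As a single-graph illustration of the latent-variable model (not the paper's joint estimator), the precision matrix of the observed nodes can be split into a sparse term minus a low-rank term capturing the hidden nodes; the paper further couples several such problems through a graph-similarity term, which this sketch omits. cvxpy is assumed and the names are illustrative.

```python
import numpy as np
import cvxpy as cp

def latent_variable_ggm(C, alpha=0.1, beta=0.5):
    """Gaussian graphical model with hidden nodes for a single graph: the
    precision of the observed block is K - L, with K sparse (links among
    observed nodes) and L PSD and low rank (effect of the hidden nodes)."""
    N = C.shape[0]
    K = cp.Variable((N, N), symmetric=True)   # sparse part
    L = cp.Variable((N, N), PSD=True)         # low-rank part (trace = nuclear norm for PSD)
    obj = (-cp.log_det(K - L) + cp.trace(C @ (K - L))
           + alpha * cp.sum(cp.abs(K)) + beta * cp.trace(L))
    cp.Problem(cp.Minimize(obj)).solve()
    return K.value, L.value

# Usage: pass the sample covariance of the *observed* nodes only, e.g.
#   K_hat, L_hat = latent_variable_ggm(np.cov(X_observed), alpha=0.1, beta=0.5)
```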

Robust Graph Filter Identification and Graph Denoising from Signal Observations

Oct 16, 2022
Samuel Rey, Victor M. Tenorio, Antonio G. Marques

When facing graph signal processing tasks, the workhorse assumption is that the graph describing the support of the signals is known. However, in many relevant applications the available graph suffers from observation errors and perturbations. As a result, any method relying on the graph topology may yield suboptimal results if those imperfections are ignored. Motivated by this, we propose a novel approach for handling perturbations on the links of the graph and apply it to the problem of robust graph filter (GF) identification from input-output observations. Different from existing works, we formulate a non-convex optimization problem that operates in the vertex domain and jointly performs GF identification and graph denoising. As a result, on top of learning the desired GF, an estimate of the graph is obtained as a byproduct. To handle the resulting bi-convex problem, we design an algorithm that blends techniques from alternating optimization and majorization-minimization, showing its convergence to a stationary point. The second part of the paper i) generalizes the design to a robust setup where several GFs are jointly estimated, and ii) introduces an alternative algorithmic implementation that reduces the computational complexity. Finally, the detrimental influence of the perturbations and the benefits resulting from the robust approach are numerically analyzed over synthetic and real-world datasets, comparing them with other state-of-the-art alternatives.

* Currently under review for publication in the IEEE Transactions on Signal Processing journal 
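
To fix ideas, the filter-identification half of such an alternating scheme reduces to least squares once the graph is held fixed; the graph-denoising half (updating the graph itself) is not shown. A numpy-only sketch under these assumptions, with illustrative names:

```python
import numpy as np

def identify_filter(S, X, Y, K=4):
    """Least-squares identification of a polynomial graph filter
    H = sum_k h_k S^k from input-output signal pairs (columns of X and Y),
    with the graph shift S held fixed."""
    powers = [np.linalg.matrix_power(S, k) for k in range(K)]
    # Each column of the design matrix is vec(S^k X); then vec(Y) = A h
    A = np.column_stack([(P @ X).ravel() for P in powers])
    h, *_ = np.linalg.lstsq(A, Y.ravel(), rcond=None)
    H = sum(h[k] * powers[k] for k in range(K))
    return h, H

# Toy usage: recover known filter coefficients from noiseless observations
rng = np.random.default_rng(2)
N, M = 15, 30
A_adj = (rng.random((N, N)) < 0.3).astype(float)
A_adj = np.triu(A_adj, 1); A_adj = A_adj + A_adj.T
h_true = np.array([1.0, 0.5, 0.25, 0.0])
H_true = sum(h_true[k] * np.linalg.matrix_power(A_adj, k) for k in range(4))
X = rng.normal(size=(N, M))
h_hat, _ = identify_filter(A_adj, X, H_true @ X)   # h_hat should be close to h_true
```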

Enhanced graph-learning schemes driven by similar distributions of motifs

Jul 11, 2022
Samuel Rey, T. Mitchell Roddenberry, Santiago Segarra, Antonio G. Marques

This paper looks at the task of network topology inference, where the goal is to learn an unknown graph from nodal observations. One of the novelties of the approach put forth is the consideration of prior information about the density of motifs of the unknown graph to enhance the inference of classical Gaussian graphical models. Dealing with the density of motifs directly constitutes a challenging combinatorial task. However, we note that if two graphs have similar motif densities, one can show that the expected value of a polynomial applied to their empirical spectral distributions will be similar. Guided by this, we first assume that we have a reference graph that is related to the sought graph (in the sense of having similar motif densities) and then exploit this relation by incorporating a similarity constraint and a regularization term in the network topology inference optimization problem. The (non-)convexity of the optimization problem is discussed and a computationally efficient alternating majorization-minimization algorithm is designed. We assess the performance of the proposed method through exhaustive numerical experiments where different constraints are considered and compared against popular baseline algorithms on both synthetic and real-world datasets.
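
The observation motivating the approach, that similar motif densities translate into similar spectral moments, can be checked numerically: trace(A^2)/N and trace(A^3)/N are (scaled) edge and triangle densities. The snippet below illustrates only this link, not the inference algorithm; names are illustrative and only numpy is needed.

```python
import numpy as np

def normalized_spectral_moments(A, max_k=4):
    """k-th normalized spectral moment trace(A^k)/N of an adjacency matrix A.
    For unweighted graphs, k = 2 and k = 3 are proportional to the edge and
    triangle densities, which ties motif counts to the spectral distribution."""
    N = A.shape[0]
    return np.array([np.trace(np.linalg.matrix_power(A, k)) / N
                     for k in range(2, max_k + 1)])

# Two graphs drawn from the same model (hence with similar motif densities)
# should have close moments, which a spectral regularizer can exploit.
rng = np.random.default_rng(3)

def er_graph(N, p):
    A = (rng.random((N, N)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

print(normalized_spectral_moments(er_graph(100, 0.1)))
print(normalized_spectral_moments(er_graph(100, 0.1)))
```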

Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning

Jan 21, 2022
Sergio Rozada, Antonio G. Marques

Value-function (VF) approximation is a central problem in Reinforcement Learning (RL). Classical non-parametric VF estimation suffers from the curse of dimensionality. As a result, parsimonious parametric models have been adopted to approximate VFs in high-dimensional spaces, with most efforts being focused on linear and neural-network-based approaches. Differently, this paper puts forth a parsimonious non-parametric approach, where we use stochastic low-rank algorithms to estimate the VF matrix in an online and model-free fashion. Furthermore, as VFs tend to be multi-dimensional, we propose replacing the classical VF matrix representation with a tensor (multi-way array) representation and, then, use the PARAFAC decomposition to design an online model-free tensor low-rank algorithm. Different versions of the algorithms are proposed, their complexity is analyzed, and their performance is assessed numerically using standardized RL environments.

* 12 pages, 6 figures, 1 table 
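
A hedged sketch of the matrix (rank-factorized) variant of this idea: represent Q ≈ L Rᵀ and update only the factors touched by each transition with a temporal-difference step. The tensor/PARAFAC version and the exact updates of the paper are not reproduced; the environment below is a random toy MDP and all names are illustrative.

```python
import numpy as np

def lowrank_q_update(L, R, s, a, r, s_next, gamma=0.95, alpha=0.05):
    """One online temporal-difference update of a low-rank Q-function
    Q approximated as L @ R.T; only the factor rows touched by (s, a) change."""
    td = r + gamma * np.max(L[s_next] @ R.T) - L[s] @ R[a]
    l_s = L[s].copy()
    L[s] += alpha * td * R[a]
    R[a] += alpha * td * l_s
    return L, R

# Toy usage: epsilon-greedy learning on a random MDP (50 states, 4 actions)
rng = np.random.default_rng(4)
nS, nA, rank = 50, 4, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))     # transition probabilities
rew = rng.normal(size=(nS, nA))                   # rewards
L = 0.1 * rng.normal(size=(nS, rank))
R = 0.1 * rng.normal(size=(nA, rank))
s = 0
for _ in range(5000):
    a = rng.integers(nA) if rng.random() < 0.2 else int(np.argmax(L[s] @ R.T))
    s_next = rng.choice(nS, p=P[s, a])
    L, R = lowrank_q_update(L, R, s, a, rew[s, a], s_next)
    s = s_next
```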

Learning Graphs from Smooth and Graph-Stationary Signals with Hidden Variables

Nov 10, 2021
Andrei Buciulea, Samuel Rey, Antonio G. Marques

Network-topology inference from (vertex) signal observations is a prominent problem across data-science and engineering disciplines. Most existing schemes assume that observations from all nodes are available, but in many practical environments, only a subset of nodes is accessible. A natural (and sometimes effective) approach is to disregard the role of unobserved nodes, but this ignores latent network effects, deteriorating the quality of the estimated graph. Differently, this paper investigates the problem of inferring the topology of a network from nodal observations while taking into account the presence of hidden (latent) variables. Our schemes assume the number of observed nodes is considerably larger than the number of hidden variables and build on recent graph signal processing models to relate the signals and the underlying graph. Specifically, we go beyond classical correlation and partial correlation approaches and assume that the signals are smooth and/or stationary in the sought graph. The assumptions are codified into different constrained optimization problems, with the presence of hidden variables being explicitly taken into account. Since the resulting problems are ill-conditioned and non-convex, the block matrix structure of the proposed formulations is leveraged and suitable convex-regularized relaxations are presented. Numerical experiments over synthetic and real-world datasets showcase the performance of the developed methods and compare them with existing alternatives.
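
For reference, the fully observed smoothness-based building block can be posed as a convex program over a valid Laplacian; the hidden-variable handling described in the abstract adds block/low-rank structure that this sketch omits. cvxpy is assumed, and the formulation below is the classical one rather than the paper's.

```python
import numpy as np
import cvxpy as cp

def learn_laplacian_smooth(X, alpha=0.5):
    """Learn a combinatorial Laplacian L such that the observed signals
    (columns of X) are smooth on the graph, i.e., trace(X.T L X) is small."""
    N = X.shape[0]
    L = cp.Variable((N, N), symmetric=True)
    ones = np.ones(N)
    cons = [L @ ones == 0,                 # zero row sums
            cp.trace(L) == N,              # fix the scale (rules out L = 0)
            L - cp.diag(cp.diag(L)) <= 0]  # non-positive off-diagonal entries
    obj = cp.trace(X.T @ L @ X) + alpha * cp.sum_squares(L)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return L.value

# Toy usage: smooth (low-pass) signals generated on a known 12-node graph
rng = np.random.default_rng(6)
N = 12
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
L_true = np.diag(A.sum(1)) - A
evals, evecs = np.linalg.eigh(L_true)
X = evecs @ np.diag(1.0 / (1.0 + evals)) @ rng.normal(size=(N, 200))
L_hat = learn_laplacian_smooth(X, alpha=0.5)
```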
