Santiago Segarra


Power Allocation for Wireless Federated Learning using Graph Neural Networks

Nov 15, 2021
Boning Li, Ananthram Swami, Santiago Segarra

We propose a data-driven approach for power allocation in the context of federated learning (FL) over interference-limited wireless networks. The power policy is designed to maximize the transmitted information during the FL process under communication constraints, with the ultimate objective of improving the accuracy and efficiency of the global FL model being trained. The proposed power allocation policy is parameterized using a graph convolutional network and the associated constrained optimization problem is solved through a primal-dual algorithm. Numerical experiments show that the proposed method outperforms three baseline methods in both transmission success rate and FL global performance.
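
To make the power-allocation policy and the primal-dual training loop concrete, here is a minimal numerical sketch. It is not the authors' implementation: a two-tap graph-filter policy stands in for the graph convolutional network, finite differences stand in for automatic differentiation, and a sum log-SINR reward under an illustrative sum-power budget stands in for the FL-aware objective; all names and constants are placeholders.

```python
# Minimal sketch, not the authors' code: graph-filter policy + primal-dual loop.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                         # number of devices
H = rng.rayleigh(1.0, (n, n))                  # illustrative channel gains
np.fill_diagonal(H, rng.rayleigh(2.0, n))      # stronger direct links
P_max, lr, dual_lr = 10.0, 1e-3, 1e-2          # power budget and step sizes

def policy(theta):
    """Graph-filter policy: power from direct gain and one-hop interference."""
    g = np.diag(H)
    return np.clip(theta[0] * g + theta[1] * (H - np.diag(g)) @ g, 0.0, None)

def lagrangian(theta, mu):
    p = policy(theta)
    interference = (H - np.diag(np.diag(H))).T @ p
    sinr = np.diag(H) * p / (1.0 + interference)
    return np.log1p(sinr).sum() - mu * (p.sum() - P_max)

theta, mu = rng.normal(0.0, 0.1, 2), 0.0
for _ in range(500):
    # Primal step: ascend the Lagrangian in the policy parameters.
    grad = np.array([(lagrangian(theta + e, mu) - lagrangian(theta - e, mu)) / 2e-4
                     for e in 1e-4 * np.eye(2)])
    theta += lr * grad
    # Dual step: tighten the multiplier when the power budget is violated.
    mu = max(0.0, mu + dual_lr * (policy(theta).sum() - P_max))
```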

Delay-Oriented Distributed Scheduling Using Graph Neural Networks

Nov 13, 2021
Zhongyuan Zhao, Gunjan Verma, Ananthram Swami, Santiago Segarra

In wireless multi-hop networks, delay is an important metric for many applications. However, the max-weight scheduling algorithms in the literature typically focus on instantaneous optimality, in which the schedule is selected by solving a maximum weighted independent set (MWIS) problem on the interference graph at each time slot. These myopic policies perform poorly in delay-oriented scheduling, in which the dependency between the current backlogs of the network and the schedule of the previous time slot needs to be considered. To address this issue, we propose a delay-oriented distributed scheduler based on graph convolutional networks (GCNs). In a nutshell, a trainable GCN module generates node embeddings that capture the network topology as well as multi-step lookahead backlogs, before calling a distributed greedy MWIS solver. In small- to medium-sized wireless networks with heterogeneous transmit power, where a few central links have many interfering neighbors, our proposed distributed scheduler can outperform the myopic schedulers based on greedy and instantaneously optimal MWIS solvers, with good generalizability across graph models and minimal increase in communication complexity.

* 5 pages, 6 figures, submitted to ICASSP 2022. arXiv admin note: text overlap with arXiv:2109.05536 
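
The scheduling pipeline described above can be illustrated with a small stand-in: a fixed (untrained) neighborhood aggregation plays the role of the GCN embedding, and a centralized greedy routine plays the role of the distributed MWIS solver. The conflict graph, backlogs, and mixing coefficients below are illustrative rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
A = (rng.random((n, n)) < 0.3).astype(float)       # conflict (interference) graph
A = np.triu(A, 1); A = A + A.T
backlog = rng.integers(1, 20, n).astype(float)     # current per-link backlogs

# GCN-flavored per-link weight: own backlog mixed with neighbors' backlogs
# (the mixing coefficients are what the paper trains).
w_self, w_neigh = 1.0, -0.2
weights = np.maximum(w_self * backlog + w_neigh * (A @ backlog), 1e-3)

def greedy_mwis(A, w):
    """Pick the heaviest remaining link, drop its conflicting neighbors, repeat."""
    remaining, schedule = set(range(len(w))), []
    while remaining:
        v = max(remaining, key=lambda i: w[i])
        schedule.append(v)
        remaining -= {v} | {u for u in remaining if A[v, u] > 0}
    return schedule

print("scheduled links:", greedy_mwis(A, weights))
```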

Stability Analysis of Unfolded WMMSE for Power Allocation

Oct 14, 2021
Arindam Chowdhury, Fernando Gama, Santiago Segarra

Power allocation is one of the fundamental problems in wireless networks, and a wide variety of algorithms address it from different perspectives. A common element among these algorithms is that they rely on an estimate of the channel state, which may be inaccurate on account of hardware defects, noisy feedback systems, and environmental and adversarial disturbances. It is therefore essential that the power allocation output by these algorithms is stable with respect to input perturbations, in the sense that bounded variations in the input lead to bounded variations in the output. In this paper, we focus on UWMMSE -- a modern algorithm leveraging graph neural networks -- and illustrate its stability to additive input perturbations of bounded energy through both theoretical analysis and empirical validation.

* Under review at IEEE ICASSP 2022 
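
For concreteness, the stability property in question can be stated as follows, in generic notation that is not necessarily the paper's. Let $\Phi(\cdot;\theta)$ denote the learned power-allocation map, $H$ the channel-state input, and $\Delta H$ an additive perturbation of bounded energy. Stability then asks for a constant $C$, independent of the particular perturbation, such that

$\| \Phi(H + \Delta H; \theta) - \Phi(H; \theta) \| \le C \, \| \Delta H \|$ whenever $\| \Delta H \| \le \varepsilon$,

where $C$ may depend on the learned weights and the network size but not on $\Delta H$ itself.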

Robust MIMO Detection using Hypernetworks with Learned Regularizers

Oct 13, 2021
Nicolas Zilberstein, Chris Dick, Rahman Doost-Mohammady, Ashutosh Sabharwal, Santiago Segarra

Optimal symbol detection in multiple-input multiple-output (MIMO) systems is known to be an NP-hard problem. Recently, there has been growing interest in getting reasonably close to the optimal solution using neural networks while keeping the computational complexity in check. However, existing work based on deep learning shows that it is difficult to design a generic network that works well for a variety of channels. In this work, we propose a method that strikes a balance between symbol error rate (SER) performance and generality across channels. Our method is based on hypernetworks that generate the parameters of a neural network-based detector that works well on a specific channel. We obtain a general framework by regularizing the training of the hypernetwork with pre-trained instances of the channel-specific method. Through numerical experiments, we show that our proposed method yields high performance for a set of prespecified channel realizations while generalizing well to all channels drawn from a specific distribution.
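
As a rough sketch of the training objective (not the authors' architecture), the snippet below uses a hypernetwork that maps a real-valued channel matrix to the weights of a simple linear detector and regularizes the generated weights toward channel-specific "pre-trained" detectors, crudely approximated here by pseudo-inverses. Antenna counts, layer widths, and the regularization weight are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
Nt, Nr, hidden = 4, 4, 64                      # illustrative antenna counts / width

class HyperNet(nn.Module):
    """Channel realization -> weights of a per-channel linear detector."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(4 * Nr * Nt, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 4 * Nt * Nr))
    def forward(self, H):                      # H: (batch, 2*Nr, 2*Nt), real-valued form
        return self.body(H.flatten(1)).view(-1, 2 * Nt, 2 * Nr)

hyper = HyperNet()
H = torch.randn(8, 2 * Nr, 2 * Nt)                           # batch of channels
x = torch.sign(torch.randn(8, 2 * Nt))                       # BPSK-like symbols
y = torch.bmm(H, x.unsqueeze(-1)).squeeze(-1) + 0.1 * torch.randn(8, 2 * Nr)

W = hyper(H)                                                 # generated detector weights
x_hat = torch.bmm(W, y.unsqueeze(-1)).squeeze(-1)
W_pre = torch.linalg.pinv(H)                                 # stand-in "pre-trained" detectors
loss = ((x_hat - x) ** 2).mean() + 0.1 * ((W - W_pre) ** 2).mean()
loss.backward()                                              # gradients for one training step
```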

Label Propagation across Graphs: Node Classification using Graph Neural Tangent Kernels

Oct 07, 2021
Artun Bayer, Arindam Chowdhury, Santiago Segarra

Graph neural networks (GNNs) have achieved superior performance on node classification tasks in the last few years. Commonly, this is framed in a transductive semi-supervised learning setup wherein the entire graph, including the target nodes to be labeled, is available for training. Driven in part by scalability, recent works have focused on the inductive case where only the labeled portion of a graph is available for training. In this context, our current work considers a challenging inductive setting where a set of labeled graphs is available for training while the unlabeled target graph is completely separate, i.e., there are no connections between labeled and unlabeled nodes. Under the implicit assumption that the testing and training graphs come from similar distributions, our goal is to develop a labeling function that generalizes to unobserved connectivity structures. To that end, we employ a graph neural tangent kernel (GNTK), which corresponds to infinitely wide GNNs, to find correspondences between nodes in different graphs based on both the topology and the node features. We augment the capabilities of the GNTK with residual connections and empirically illustrate its performance gains on standard benchmarks.

* Under review at IEEE ICASSP 2022 
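
A drastically simplified sketch of the mechanism, assuming a plain neighborhood-aggregation kernel in place of the full GNTK recursion: node embeddings built from topology and features define similarities between training-graph and target-graph nodes, which kernel ridge regression then uses to propagate labels to the fully separate target graph. All sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n, p=0.15):
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

def node_kernel(A1, X1, A2, X2, hops=2):
    """Node-to-node similarities via residual neighborhood aggregation."""
    def embed(A, X):
        S = A + np.eye(len(A))
        S = S / S.sum(1, keepdims=True)            # row-normalized aggregation
        Z = X.copy()
        for _ in range(hops):
            Z = S @ Z + X                          # residual connection
        return Z
    return embed(A1, X1) @ embed(A2, X2).T

n_tr, n_te, d = 30, 20, 5
A_tr, A_te = random_graph(n_tr), random_graph(n_te)
X_tr, X_te = rng.normal(size=(n_tr, d)), rng.normal(size=(n_te, d))
y_tr = rng.integers(0, 2, n_tr) * 2.0 - 1.0        # +/-1 labels on the training graph

K_tr = node_kernel(A_tr, X_tr, A_tr, X_tr)
K_te = node_kernel(A_te, X_te, A_tr, X_tr)
alpha = np.linalg.solve(K_tr + 0.1 * np.eye(n_tr), y_tr)   # kernel ridge regression
y_hat = np.sign(K_te @ alpha)                      # predicted labels on the target graph
```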

Unrolling Particles: Unsupervised Learning of Sampling Distributions

Oct 06, 2021
Fernando Gama, Nicolas Zilberstein, Richard G. Baraniuk, Santiago Segarra

Particle filtering is used to compute good nonlinear estimates of complex systems. It samples trajectories from a chosen distribution and computes the estimate as a weighted average. Easy-to-sample distributions often lead to degenerate samples, where only one trajectory carries all the weight, negatively affecting the performance of the resulting estimate. While much research has been done on the design of appropriate sampling distributions that would lead to controlled degeneracy, in this paper our objective is instead to learn sampling distributions. Leveraging the framework of algorithm unrolling, we model the sampling distribution as a multivariate normal, and we use neural networks to learn both the mean and the covariance. We carry out unsupervised training of the model to minimize weight degeneracy, relying only on the observed measurements of the system. We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
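
The sketch below shows one way the unsupervised objective could look in code, under placeholder choices for the state-space model, network size, and step counts: a small network parameterizes a Gaussian proposal, samples are drawn with the reparameterization trick, and the loss is the sum of squared normalized weights (the reciprocal of the effective sample size), computed from the observed measurement only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
K = 50                                          # particles (single time step for brevity)

# Illustrative nonlinear state-space model: x' = tanh(x) + noise, y = x' + noise.
def transition(x): return torch.tanh(x) + 0.3 * torch.randn_like(x)
x_true = torch.randn(1)
y = transition(x_true) + 0.1 * torch.randn(1)   # observed measurement

# Learned proposal: a small network maps (particle, measurement) to mean and log-std.
proposal = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(proposal.parameters(), lr=1e-2)

particles = torch.randn(K, 1)
for step in range(200):
    inp = torch.cat([particles, y.expand(K, 1)], dim=1)
    mu, log_std = proposal(inp).chunk(2, dim=1)
    eps = torch.randn(K, 1)
    samples = mu + log_std.exp() * eps          # reparameterized proposal samples
    # Importance weights in the log domain: likelihood * prior / proposal.
    log_lik = -0.5 * ((y - samples) / 0.1) ** 2
    log_prior = -0.5 * ((samples - torch.tanh(particles)) / 0.3) ** 2
    log_prop = -0.5 * eps ** 2 - log_std
    log_w = (log_lik + log_prior - log_prop).squeeze(1)
    w = torch.softmax(log_w, dim=0)             # normalized weights
    loss = (w ** 2).sum()                       # inverse effective sample size
    opt.zero_grad(); loss.backward(); opt.step()
```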

Joint inference of multiple graphs with hidden variables from stationary graph signals

Oct 05, 2021
Samuel Rey, Andrei Buciulea, Madeline Navarro, Santiago Segarra, Antonio G. Marques

Learning graphs from sets of nodal observations represents a prominent problem formally known as graph topology inference. However, current approaches are typically limited to inferring a single network and assume that observations from all nodes are available. In practice, many contemporary setups involve multiple related networks, and it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by these facts, we introduce a joint graph topology inference method that models the influence of the hidden variables. Under the assumptions that the observed signals are stationary on the sought graphs and that the graphs are closely related, the joint estimation of multiple networks allows us to exploit such relationships to improve the quality of the learned graphs. Moreover, we confront the challenging problem of modeling the influence of the hidden nodes to minimize their detrimental effect. To obtain an amenable approach, we take advantage of the particular structure of the setup at hand and leverage the similarity between the different graphs, which affects both the observed and the hidden nodes. To test the proposed method, we provide numerical simulations over synthetic and real-world graphs.

* Paper submitted to ICASSP 2022 
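
To spell out the stationarity assumption and the role of the hidden nodes in generic notation (not necessarily the paper's): stationarity of a signal with covariance $C_x$ on a graph with shift operator $S$ means $C_x S = S C_x$. Partitioning both matrices into observed (o) and hidden (h) blocks, the observed block of this identity reads $C_o S_o + C_{oh} S_{ho} = S_o C_o + S_{oh} C_{ho}$, so the sought $S_o$ is coupled with cross terms whose rank is at most the number of hidden nodes; modeling these low-rank terms, jointly across the closely related graphs, is what the formulation above exploits.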

A Robust Alternative for Graph Convolutional Neural Networks via Graph Neighborhood Filters

Oct 02, 2021
Victor M. Tenorio, Samuel Rey, Fernando Gama, Santiago Segarra, Antonio G. Marques

Graph convolutional neural networks (GCNNs) are popular deep learning architectures that, upon replacing regular convolutions with graph filters (GFs), generalize CNNs to irregular domains. However, classical GFs are prone to numerical errors since they consist of high-order polynomials. This problem is aggravated when several filters are applied in cascade, limiting the practical depth of GCNNs. To tackle this issue, we present the neighborhood graph filters (NGFs), a family of GFs that replaces the powers of the graph shift operator with $k$-hop neighborhood adjacency matrices. NGFs help to alleviate the numerical issues of traditional GFs, allow for the design of deeper GCNNs, and enhance the robustness to errors in the topology of the graph. To illustrate the advantage over traditional GFs in practical applications, we use NGFs in the design of deep neighborhood GCNNs to solve graph signal denoising and node classification problems over both synthetic and real-world data.

* Presented at the 2021 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 31 Oct. -- 3 Nov. 2021 
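
A small numpy sketch of the filter family itself, assuming $A_k$ denotes the adjacency matrix of node pairs exactly $k$ hops apart; the graph, signal, and filter taps are illustrative and the function names are not from any released code.

```python
import numpy as np

def khop_matrices(A, K):
    """A_k[i, j] = 1 iff the shortest path between i and j has exactly k hops."""
    n = len(A)
    within_prev = np.eye(n, dtype=bool)                      # nodes within 0 hops
    within = within_prev.copy()
    mats = [np.eye(n)]
    for _ in range(K):
        within = within | ((within.astype(float) @ A) > 0)   # nodes within k hops
        mats.append((within & ~within_prev).astype(float))   # exactly k hops
        within_prev = within.copy()
    return mats                                              # [A_0 = I, A_1, ..., A_K]

def ngf(A, x, h):
    """Neighborhood graph filter: y = sum_k h[k] * A_k x, replacing h[k] * S^k x."""
    mats = khop_matrices(A, len(h) - 1)
    return sum(hk * (Ak @ x) for hk, Ak in zip(h, mats))

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T
x = rng.normal(size=n)
y = ngf(A, x, h=[1.0, 0.5, 0.25])                            # illustrative filter taps
```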

Untrained Graph Neural Networks for Denoising

Sep 24, 2021
Samuel Rey, Santiago Segarra, Reinhard Heckel, Antonio G. Marques

A fundamental problem in signal processing is to denoise a signal. While there are many well-performing methods for denoising signals defined on regular supports, such as images defined on two-dimensional grids of pixels, many important classes of signals are defined over irregular domains such as graphs. This paper introduces two untrained graph neural network architectures for graph signal denoising, provides theoretical guarantees for their denoising capabilities in a simple setup, and numerically validates the theoretical results in more general scenarios. The two architectures differ in how they incorporate the information encoded in the graph, with one relying on graph convolutions and the other employing graph upsampling operators based on hierarchical clustering. Each architecture implements a different prior over the targeted signals. To numerically illustrate the validity of the theoretical results and to compare the performance of the proposed architectures with other denoising alternatives, we present several experimental results with real and synthetic datasets.
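
A minimal sketch of the graph-convolutional variant, in the spirit of deep-prior denoising and under my own choices of shift operator, depth, widths, and iteration count: a small graph network with a fixed random input is fit to the noisy observation, with early stopping providing the implicit regularization.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 50, 16
A = (torch.rand(n, n) < 0.1).float()
A = torch.triu(A, 1); A = A + A.T
S = A / A.sum(1, keepdim=True).clamp(min=1)                  # normalized graph shift

x_clean = torch.linalg.matrix_power(S, 3) @ torch.randn(n)   # smooth-ish graph signal
y = x_clean + 0.3 * torch.randn(n)                           # noisy observation

class UntrainedGCN(nn.Module):
    """Two graph-convolution layers applied to a fixed random input."""
    def __init__(self):
        super().__init__()
        self.W1, self.W2 = nn.Linear(d, d), nn.Linear(d, 1)
    def forward(self, Z):
        H = torch.relu(S @ self.W1(Z))                       # aggregate, then mix features
        return (S @ self.W2(H)).squeeze(-1)

Z = torch.randn(n, d)                                        # fixed random input (not trained)
net = UntrainedGCN()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(300):                                      # early stopping = implicit prior
    loss = ((net(Z) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x_hat = net(Z).detach()                                      # denoised estimate
```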

Hodgelets: Localized Spectral Representations of Flows on Simplicial Complexes

Sep 17, 2021
T. Mitchell Roddenberry, Florian Frantzen, Michael T. Schaub, Santiago Segarra

We develop wavelet representations for edge-flows on simplicial complexes, using ideas rooted in combinatorial Hodge theory and spectral graph wavelets. We first show that the Hodge Laplacian can be used in lieu of the graph Laplacian to construct a family of wavelets for higher-order signals on simplicial complexes. Then, we refine this idea to construct wavelets that respect the Hodge-Helmholtz decomposition. For these Hodgelets, familiar notions of curl-free and divergence-free flows from vector calculus are preserved. We characterize the representational quality of our Hodgelets for edge flows in terms of frame bounds and demonstrate the use of these spectral wavelets for sparse representation of edge flows on real and synthetic data.
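
To fix ideas, here is a toy construction on a four-node complex with one filled triangle. The kernel $g(\lambda) = \lambda e^{-\lambda}$ and the scales are arbitrary choices, and separate dictionaries are built from the lower and upper Hodge Laplacians so that the gradient (curl-free) and curl (divergence-free) parts of the flow are treated separately, mirroring but not reproducing the paper's construction.

```python
import numpy as np

# Tiny simplicial complex: 4 nodes, 5 edges, 1 filled triangle on edges (0,1), (0,2), (1,2).
# B1: node-to-edge incidence, B2: edge-to-triangle incidence (orientations fixed arbitrarily).
B1 = np.array([[-1, -1,  0, -1,  0],
               [ 1,  0, -1,  0,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0], [0]], dtype=float)

L_low = B1.T @ B1          # lower Hodge Laplacian (gradient / curl-free part)
L_up = B2 @ B2.T           # upper Hodge Laplacian (curl / divergence-free part)
L1 = L_low + L_up          # Hodge 1-Laplacian

def wavelet_atoms(L, scales=(0.5, 2.0)):
    """Spectral-graph-wavelet-style operators g(s * lambda) built from the spectrum of L."""
    lam, U = np.linalg.eigh(L)
    return [U @ np.diag(s * lam * np.exp(-s * lam)) @ U.T for s in scales]

# Separate dictionaries for the two parts of the Hodge decomposition.
atoms = wavelet_atoms(L_low) + wavelet_atoms(L_up)
flow = np.array([1.0, 0.5, -0.5, 2.0, 1.0])        # an edge flow
coeffs = [W @ flow for W in atoms]                 # wavelet analysis of the flow
```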
