Santiago Segarra

Learning to Transmit with Provable Guarantees in Wireless Federated Learning

Apr 18, 2023
Boning Li, Jake Perazzone, Ananthram Swami, Santiago Segarra

We propose a novel data-driven approach to allocate transmit power for federated learning (FL) over interference-limited wireless networks. The proposed method is useful in challenging scenarios where the wireless channel changes during the FL training process and the training data are not independent and identically distributed (non-i.i.d.) across the local devices. Intuitively, the power policy is designed to optimize the information received at the server end during the FL process under communication constraints. Ultimately, our goal is to improve the accuracy and efficiency of the global FL model being trained. The proposed power allocation policy is parameterized using a graph convolutional network and the associated constrained optimization problem is solved through a primal-dual (PD) algorithm. Theoretically, we show that the formulated problem has zero duality gap and, once the power policy is parameterized, optimality depends on how expressive this parameterization is. Numerically, we demonstrate that the proposed method outperforms existing baselines under different wireless channel settings and varying degrees of data heterogeneity.
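Concretely, the primal-dual idea alternates a gradient step on the policy parameters with a dual-ascent step on the constraint violation. The sketch below is a minimal illustration under assumed ingredients (a toy one-layer graph network `SimpleGCN`, a generic sum-rate utility, and a single total-power constraint); it is not the paper's architecture or problem instance.

```python
# Hypothetical primal-dual loop for a graph-network-parameterized power policy.
# Objective, constraint, and sizes are illustrative, not the paper's.
import torch

class SimpleGCN(torch.nn.Module):
    """One graph-filtering step followed by an MLP; outputs a power level per device."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lin1 = torch.nn.Linear(1, hidden)
        self.lin2 = torch.nn.Linear(hidden, 1)

    def forward(self, H, x):
        # H: (n, n) channel-gain matrix used as the graph shift; x: (n, 1) node features
        z = torch.relu(self.lin1(H @ x))
        return torch.sigmoid(self.lin2(z)).squeeze(-1)        # powers in (0, 1)

def sum_rate(H, p, noise=1e-2):
    """Toy utility: diagonal entries are desired gains, off-diagonal ones interference."""
    sig = torch.diag(H) * p
    interf = H @ p - sig
    return torch.log2(1.0 + sig / (interf + noise)).sum()

policy, lam = SimpleGCN(), torch.tensor(0.0)                  # primal net, dual variable
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
p_budget, eta_dual = 2.0, 1e-2                                # power budget, dual step size

for _ in range(500):
    H = torch.rand(10, 10)                                    # random channel realization
    p = policy(H, torch.ones(10, 1))
    lagrangian = -sum_rate(H, p) + lam * (p.sum() - p_budget)
    opt.zero_grad(); lagrangian.backward(); opt.step()        # primal step
    with torch.no_grad():                                     # dual ascent step
        lam = torch.clamp(lam + eta_dual * (p.sum() - p_budget), min=0.0)
```

In the paper, the utility instead captures the information received at the server during the FL rounds and the constraints reflect the wireless communication budget.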

Deep Graph Unfolding for Beamforming in MU-MIMO Interference Networks

Apr 02, 2023
Arindam Chowdhury, Gunjan Verma, Ananthram Swami, Santiago Segarra

We develop an efficient and near-optimal solution for beamforming in multi-user multiple-input-multiple-output single-hop wireless ad-hoc interference networks. Inspired by the weighted minimum mean squared error (WMMSE) method, a classical approach to solving this problem, and the principle of algorithm unfolding, we present unfolded WMMSE (UWMMSE) for MU-MIMO. This method learns a parameterized functional transformation of key WMMSE parameters using graph neural networks (GNNs), where the channel and interference components of a wireless network constitute the underlying graph. These GNNs are trained through gradient descent on a network utility metric using multiple instances of the beamforming problem. Comprehensive experimental analyses illustrate the superiority of UWMMSE over the classical WMMSE and state-of-the-art learning-based methods in terms of performance, generalizability, and robustness.

* Under review at IEEE Transactions on Wireless Communications 
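As an illustration of the unfolding principle, the sketch below stacks a few classical WMMSE-style updates and inserts a small learnable correction inside each layer. It is a single-antenna (scalar) caricature with made-up module names and sizes; the paper's UWMMSE operates on MU-MIMO beamforming matrices and replaces the per-layer MLP with a GNN over the interference graph.

```python
# Scalar sketch of WMMSE-style algorithm unfolding; all names/sizes are illustrative.
import torch

class UnfoldedWMMSE(torch.nn.Module):
    """A few unfolded WMMSE-style layers, each with a small learned correction."""
    def __init__(self, n_layers=4, p_max=1.0, sigma2=1e-2):
        super().__init__()
        self.p_max, self.sigma2 = p_max, sigma2
        self.corr = torch.nn.ModuleList(
            [torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.ReLU(),
                                 torch.nn.Linear(16, 1)) for _ in range(n_layers)])

    def forward(self, H):
        # H[i, j]: channel gain from transmitter j to receiver i; start at full power
        d, G = torch.diag(H), H ** 2
        v = torch.full_like(d, self.p_max ** 0.5)
        for layer in self.corr:
            u = d * v / (self.sigma2 + G @ (v ** 2))           # receiver update
            w = 1.0 / (1.0 - u * d * v + 1e-8)                 # MMSE-weight update
            w = w + layer(w.unsqueeze(-1)).squeeze(-1)         # learned correction
            v = (w * u * d) / (G.t() @ (w * u ** 2) + 1e-8)    # transmitter update
            v = v.clamp(0.0, self.p_max ** 0.5)
        return v ** 2                                          # transmit powers
```

The network is then trained end-to-end through these layers on a network utility, as described in the abstract.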

Signal Processing on Product Spaces

Mar 18, 2023
T. Mitchell Roddenberry, Vincent P. Grande, Florian Frantzen, Michael T. Schaub, Santiago Segarra

We establish a framework for signal processing on product spaces of simplicial and cellular complexes. For simplicity, we focus on the product of two complexes representing time and space, although our results generalize naturally to products of simplicial complexes of arbitrary dimension. Our framework leverages the structure of the eigenmodes of the Hodge Laplacian of the product space to jointly filter along time and space. To this end, we provide a decomposition theorem of the Hodge Laplacian of the product space, which highlights how the product structure induces a decomposition of each eigenmode into a spatial and temporal component. Finally, we apply our method to real-world data, specifically for interpolating trajectories of buoys in the ocean from a limited set of observed trajectories.
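For intuition, the familiar graph-level special case of this decomposition (node signals on a Cartesian product of two graphs) reads as follows; the paper's theorem extends this to Hodge Laplacians of every order on simplicial and cellular products.

```latex
\[
  L_{G_1 \square G_2} \;=\; L_{G_1} \otimes I \;+\; I \otimes L_{G_2},
  \qquad
  L_{G_1 \square G_2}\,(u_i \otimes v_j) \;=\; (\lambda_i + \mu_j)\,(u_i \otimes v_j),
\]
where $L_{G_1} u_i = \lambda_i u_i$ and $L_{G_2} v_j = \mu_j v_j$, so every eigenmode of
the product factors into a temporal component $u_i$ and a spatial component $v_j$.
```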

Windowed Fourier Analysis for Signal Processing on Graph Bundles

Feb 11, 2023
T. Mitchell Roddenberry, Santiago Segarra

We consider the task of representing signals supported on graph bundles, which are generalizations of product graphs that allow for "twists" in the product structure. Leveraging the localized product structure of a graph bundle, we demonstrate how a suitable partition of unity over the base graph can be used to lift the signal on the graph into a space where a product factorization can be readily applied. Motivated by the locality of this procedure, we demonstrate that bases for the signal spaces of the components of the graph bundle can be lifted in the same way, yielding a basis for the signal space of the total graph. We demonstrate this construction on synthetic graphs, as well as with an analysis of the energy landscape of conformational manifolds in stereochemistry.
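As a rough picture of the lifting step (the notation here is illustrative, not the paper's): a partition of unity on the base graph $B$ is a family of nonnegative functions subordinate to patches over which the bundle trivializes,

```latex
\[
  \varphi_k(v) \ge 0, \qquad \sum_k \varphi_k(v) = 1 \quad \forall v \in V(B),
  \qquad \operatorname{supp}(\varphi_k) \subseteq U_k ,
\]
```

so that a signal $x$ on the total graph decomposes as $x = \sum_k (\varphi_k \circ \pi)\, x$ with $\pi$ the bundle projection, and each localized summand lives where a product factorization can be applied.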

Unsupervised Learning of Sampling Distributions for Particle Filters

Feb 02, 2023
Fernando Gama, Nicolas Zilberstein, Martin Sevilla, Richard Baraniuk, Santiago Segarra

Accurate estimation of the states of a nonlinear dynamical system is crucial for its design, synthesis, and analysis. Particle filters are estimators constructed by simulating trajectories from a sampling distribution and averaging them based on their importance weights. For particle filters to be computationally tractable, it must be feasible to simulate the trajectories by drawing from the sampling distribution. Simultaneously, these trajectories need to reflect the reality of the nonlinear dynamical system so that the resulting estimators are accurate. Thus, the crux of particle filters lies in designing sampling distributions that are both easy to sample from and lead to accurate estimators. In this work, we propose to learn the sampling distributions. We put forward four methods for learning sampling distributions from observed measurements. Three of the methods are parametric methods in which we learn the mean and covariance matrix of a multivariate Gaussian distribution; each method exploits a different aspect of the data (generic, time structure, graph structure). The fourth method is a nonparametric alternative in which we directly learn a transform of a uniform random variable. All four methods are trained in an unsupervised manner by maximizing the likelihood that the states may have produced the observed measurements. Our computational experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
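The sketch below shows where a learned proposal enters a single particle-filter step; `GaussianProposal`, the model callables `log_lik`/`log_trans`, and all dimensions are illustrative assumptions standing in for the parametric variants described above.

```python
# Minimal sketch: a particle-filter step with a learned Gaussian sampling distribution.
import torch

class GaussianProposal(torch.nn.Module):
    """Maps (previous particle, current measurement) to a diagonal-Gaussian proposal."""
    def __init__(self, dim_x=2, dim_y=2, hidden=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim_x + dim_y, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2 * dim_x))

    def forward(self, x_prev, y):
        inp = torch.cat([x_prev, y.expand(len(x_prev), -1)], dim=-1)
        mean, log_std = self.net(inp).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

def pf_step(particles, weights, y, proposal, log_lik, log_trans):
    """One step: propose from the learned distribution, reweight, normalize.
    log_lik(x, y) and log_trans(x, x_prev) are the model's log densities."""
    q = proposal(particles, y)
    new_p = q.rsample()
    # w  ∝  w_prev · p(y | x) · p(x | x_prev) / q(x | x_prev, y)   (resampling omitted)
    log_w = (weights.log() + log_lik(new_p, y)
             + log_trans(new_p, particles) - q.log_prob(new_p).sum(-1))
    return new_p, torch.softmax(log_w, dim=0)
```

Training would then proceed without state labels, e.g. by maximizing an estimate of the measurement likelihood accumulated along the filtered trajectory, in the spirit of the unsupervised criterion described above.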

Joint graph learning from Gaussian observations in the presence of hidden nodes

Dec 04, 2022
Samuel Rey, Madeline Navarro, Andrei Buciulea, Santiago Segarra, Antonio G. Marques

Graph learning problems are typically approached by focusing on learning the topology of a single graph when signals from all nodes are available. However, many contemporary setups involve multiple related networks and, moreover, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by this, we propose a joint graph learning method that takes into account the presence of hidden (latent) variables. Intuitively, the presence of the hidden nodes renders the inference task ill-posed and challenging to solve, so we overcome this detrimental influence by harnessing the similarity of the estimated graphs. To that end, we assume that the observed signals are drawn from a Gaussian Markov random field with latent variables and we carefully model the graph similarity among hidden (latent) nodes. Then, we exploit the structure resulting from the previous considerations to propose a convex optimization problem that solves the joint graph learning task by providing a regularized maximum likelihood estimator. Finally, we compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.

* This paper has been accepted at the 2022 Asilomar Conference on Signals, Systems, and Computers 
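One common way to write this kind of regularized maximum-likelihood estimator, shown here for intuition only (the paper's exact objective and penalties may differ), combines a latent-variable Gaussian graphical model per network with a coupling term across networks:

```latex
\[
  \min_{\substack{S_k,\;L_k \succeq 0 \\ S_k - L_k \succ 0}}
  \sum_{k=1}^{K} \Big[ \operatorname{tr}\!\big(\hat{\Sigma}_k (S_k - L_k)\big)
  - \log\det(S_k - L_k) + \alpha \|S_k\|_1 + \beta \operatorname{tr}(L_k) \Big]
  + \gamma \sum_{k \neq k'} \|S_k - S_{k'}\|_1 ,
\]
```

where $\hat{\Sigma}_k$ is the sample covariance of the observed nodes in network $k$, the sparse term $S_k$ encodes the sought topology, the low-rank term $L_k$ absorbs the influence of the few hidden nodes, and the last penalty enforces similarity across the jointly learned graphs.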

Delay-aware Backpressure Routing Using Graph Neural Networks

Nov 19, 2022
Zhongyuan Zhao, Bojan Radojicic, Gunjan Verma, Ananthram Swami, Santiago Segarra

We propose a throughput-optimal biased backpressure (BP) algorithm for routing, where the bias is learned through a graph neural network that seeks to minimize end-to-end delay. Classical BP routing provides a simple yet powerful distributed solution for resource allocation in wireless multi-hop networks but has poor delay performance. A low-cost approach to improve this delay performance is to favor shorter paths by incorporating pre-defined biases in the BP computation, such as a bias based on the shortest path (hop) distance to the destination. In this work, we improve upon the widely-used metric of hop distance (and its variants) for the shortest path bias by introducing a bias based on the link duty cycle, which we predict using a graph convolutional neural network. Numerical results show that our approach can improve the delay performance compared to classical BP and existing BP alternatives based on pre-defined bias while being adaptive to interference density. In terms of complexity, our distributed implementation only introduces a one-time overhead (linear in the number of devices in the network) compared to classical BP, and a constant overhead compared to the lowest-complexity existing bias-based BP algorithms.

* 5 pages, 5 figures, submitted to IEEE ICASSP 2023 
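For reference, the per-link decision in biased backpressure takes the simple form sketched below; the learned, duty-cycle-based bias from the paper would be supplied through the `delta` inputs (the function and variable names here are illustrative).

```python
def bp_link_decision(Q_i, Q_j, delta_i, delta_j, rate_ij):
    """Q_*: per-commodity queue backlogs at nodes i and j; delta_*: per-commodity biases
    (e.g., predicted from link duty cycles). Returns the commodity and the weight that
    a max-weight scheduler would use for link (i, j)."""
    best_c, best_w = None, 0.0
    for c in Q_i:
        # biased backlog differential: queue difference plus bias difference
        w = (Q_i[c] + delta_i[c]) - (Q_j[c] + delta_j[c])
        if w > best_w:
            best_c, best_w = c, w
    return best_c, rate_ij * best_w
```

Classical BP corresponds to all-zero biases; shortest-path-biased BP sets each bias proportional to the hop distance to the destination.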

Graph Filters for Signal Processing and Machine Learning on Graphs

Nov 16, 2022
Elvin Isufi, Fernando Gama, David I. Shuman, Santiago Segarra

Filters are fundamental in extracting information from data. For time series and image data that reside on Euclidean domains, filters are the crux of many signal processing and machine learning techniques, including convolutional neural networks. Increasingly, modern data also reside on networks and other irregular domains whose structure is better captured by a graph. To process and learn from such data, graph filters account for the structure of the underlying data domain. In this article, we provide a comprehensive overview of graph filters, including the different filtering categories, design strategies for each type, and trade-offs between different types of graph filters. We discuss how to extend graph filters into filter banks and graph neural networks to enhance the representational power; that is, to model a broader variety of signal classes, data patterns, and relationships. We also showcase the fundamental role of graph filters in signal processing and machine learning applications. Our aim is that this article serves the dual purpose of providing a unifying framework for both beginner and experienced researchers, as well as a common understanding that promotes collaborations between signal processing, machine learning, and application domains.
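The basic object in this survey is the convolutional (polynomial) graph filter, which can be implemented with nothing more than repeated application of a graph shift operator; a minimal sketch:

```python
# Polynomial graph filter: y = sum_k h[k] * S^k x, computed by repeated shifting.
import numpy as np

def graph_filter(S, x, h):
    """S: graph shift operator (adjacency or Laplacian); x: graph signal; h: filter taps."""
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)             # z = S^0 x
    for hk in h:
        y += hk * z
        z = S @ z                   # next shifted signal S^{k+1} x
    return y

# toy usage: a 4-node path graph and a 3-tap filter
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
y = graph_filter(A, np.array([1.0, 0.0, 0.0, 0.0]), h=[0.5, 0.3, 0.2])
```

The filter banks and graph neural networks discussed in the article build directly on this operation by combining several such filters with pointwise nonlinearities.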

Neural multi-event forecasting on spatio-temporal point processes using probabilistically enriched transformers

Nov 05, 2022
Negar Erfanian, Santiago Segarra, Maarten de Hoop

Predicting discrete events in time and space has many scientific applications, such as predicting hazardous earthquakes and outbreaks of infectious diseases. History-dependent spatio-temporal Hawkes processes are often used to mathematically model these point events. However, previous approaches have faced numerous challenges, particularly when attempting to forecast one or multiple future events. In this work, we propose a new neural architecture for multi-event forecasting of spatio-temporal point processes, utilizing transformers, augmented with normalizing flows and probabilistic layers. Our network makes batched predictions of complex history-dependent spatio-temporal distributions of future discrete events, achieving state-of-the-art performance on a variety of benchmark datasets including the South California Earthquakes, Citibike, Covid-19, and Hawkes synthetic pinwheel datasets. More generally, we illustrate how our network can be applied to any dataset of discrete events with associated markers, even when no underlying physics is known.

* Submitted to ICLR2023 
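For context, a parametric spatio-temporal Hawkes process specifies the conditional intensity as

```latex
\[
  \lambda\big(t, s \mid \mathcal{H}_t\big)
  \;=\; \mu(s) \;+\; \sum_{(t_i, s_i)\in \mathcal{H}_t,\; t_i < t} g\big(t - t_i,\, s - s_i\big),
\]
```

with background rate $\mu$ and triggering kernel $g$; the proposed architecture replaces this fixed parametric form with transformer-encoded history and normalizing-flow output distributions over future event times and locations.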

GraphMAD: Graph Mixup for Data Augmentation using Data-Driven Convex Clustering

Oct 27, 2022
Madeline Navarro, Santiago Segarra

We develop a novel data-driven nonlinear mixup mechanism for graph data augmentation and present different mixup functions for sample pairs and their labels. Mixup is a data augmentation method to create new training data by linearly interpolating between pairs of data samples and their labels. Mixup of graph data is challenging since the interpolation between graphs of potentially different sizes is an ill-posed operation. Hence, a promising approach for graph mixup is to first project the graphs onto a common latent feature space and then explore linear and nonlinear mixup strategies in this latent space. In this context, we propose to (i) project graphs onto the latent space of continuous random graph models known as graphons, (ii) leverage convex clustering in this latent space to generate nonlinear data-driven mixup functions, and (iii) investigate the use of different mixup functions for labels and data samples. We evaluate our graph data augmentation performance on benchmark datasets and demonstrate that nonlinear data-driven mixup functions can significantly improve graph classification.

* 5 pages, 2 figures, 2 tables, submitted to ICASSP 2023 
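As a simplified picture of the graphon-space step, the sketch below linearly blends two step-function graphon estimates and samples a new training graph from the mixture; the data-driven nonlinear mixup functions obtained via convex clustering in the paper replace the plain linear blend used here, and all names are illustrative.

```python
# Illustrative graphon-space mixup: blend two step-function graphons, sample a graph.
import numpy as np

def mixup_graphons(W1, W2, lam):
    """Linear blend of two step-function graphons (k x k matrices of edge probabilities)."""
    return lam * W1 + (1.0 - lam) * W2

def sample_graph(W, n, rng=None):
    """Sample an n-node undirected graph from the step-function graphon W."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=n)                                   # latent node positions
    idx = np.minimum((u * W.shape[0]).astype(int), W.shape[0] - 1)
    P = W[np.ix_(idx, idx)]                                   # pairwise edge probabilities
    A = np.triu(rng.uniform(size=(n, n)) < P, 1).astype(int)  # upper triangle, no loops
    return A + A.T                                            # symmetric adjacency

# toy usage: blend a flat (Erdos-Renyi-like) graphon with a community-structured one
W_mix = mixup_graphons(np.full((4, 4), 0.2), 0.8 * np.eye(4) + 0.1, lam=0.6)
A_new = sample_graph(W_mix, n=30)
```

Label mixup would proceed analogously, e.g. pairing the sampled graph with a (possibly different) convex combination of the two class labels, reflecting item (iii) above.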