Mung Chiang

Asynchronous Multi-Model Federated Learning over Wireless Networks: Theory, Modeling, and Optimization

May 22, 2023
Zhan-Lun Chang, Seyyedali Hosseinalipour, Mung Chiang, Christopher G. Brinton

Federated learning (FL) has emerged as a key technique for distributed machine learning (ML). Most literature on FL has focused on systems with (i) ML model training for a single task/model and (ii) a synchronous setting for uplink/downlink transfer of model parameters, both of which are often unrealistic. To address this, we develop MA-FL, which considers FL with multiple downstream tasks to be trained over an asynchronous model transmission architecture. We first characterize the convergence of ML model training under MA-FL by introducing a family of scheduling tensors to capture the scheduling of devices. Our convergence analysis sheds light on the impact of resource allocation (e.g., the mini-batch size and number of gradient descent iterations), device scheduling, and individual model states (i.e., warmed vs. cold initialization) on the performance of ML models. We then formulate a non-convex mixed-integer optimization problem for jointly configuring the resource allocation and device scheduling to strike an efficient trade-off between energy consumption and ML performance, which is solved via successive convex approximations. Through numerical simulations, we reveal the advantages of MA-FL in terms of model performance and network resource savings.
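
As a rough illustration of the scheduling-tensor idea (not the paper's implementation), the sketch below uses a binary tensor S[k, m, i] to mark whether device i trains model m in round k, and aggregates each model only over its scheduled devices; the toy local update, dimensions, and random schedule are all assumptions.

```python
# Minimal sketch, assuming a binary scheduling tensor S[k, m, i] = 1 when device i
# trains model m during round k. Dimensions and the toy local update are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, M, N, D = 5, 2, 4, 3                      # rounds, models, devices, parameter dim

S = rng.integers(0, 2, size=(K, M, N))       # scheduling tensor (assumption)
global_models = np.zeros((M, D))             # one parameter vector per task/model

def local_update(w, steps=2, lr=0.1):
    """Placeholder local training: a few gradient steps on a toy quadratic."""
    for _ in range(steps):
        w = w - lr * (w - 1.0)               # gradient of 0.5*||w - 1||^2
    return w

for k in range(K):
    for m in range(M):
        scheduled = np.flatnonzero(S[k, m])  # devices assigned to model m this round
        if scheduled.size == 0:
            continue                         # asynchronous: model m idles this round
        locals_ = np.stack([local_update(global_models[m]) for _ in scheduled])
        global_models[m] = locals_.mean(axis=0)   # aggregate only scheduled devices

print(global_models)
```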

* Submission to Mobihoc 2023 

Towards Cooperative Federated Learning over Heterogeneous Edge/Fog Networks

Mar 15, 2023
Su Wang, Seyyedali Hosseinalipour, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Weifeng Su, Mung Chiang

Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks. Traditional implementations of FL have largely neglected the potential for inter-network cooperation, treating edge/fog devices and other infrastructure participating in ML as separate processing elements. Consequently, FL has been vulnerable to several dimensions of network heterogeneity, such as varying computation capabilities, communication resources, data qualities, and privacy demands. We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions. Through D2D and D2S cooperation, CFL counteracts network heterogeneity in edge/fog networks by enabling a model/data/resource pooling mechanism, which yields substantial improvements in ML model training quality and network resource consumption. We propose a set of core methodologies that form the foundation of D2D and D2S cooperation and present preliminary experiments that demonstrate their benefits. We also discuss new FL functionalities enabled by this cooperative framework, such as the integration of unlabeled data and heterogeneous device privacy into ML model training. Finally, we describe some open research directions at the intersection of cooperative edge/fog computing and FL.
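
A minimal sketch of the D2D/D2S cooperation idea, under toy assumptions: a resource-poor device offloads part of its data to a stronger D2D neighbor before the usual device-to-server aggregation. The offloading rule and the "local model" (a simple data mean) are illustrative stand-ins, not the paper's methodology.

```python
# Minimal sketch (illustrative, not CFL's actual algorithms): D2D data pooling
# followed by a standard D2S weighted aggregation.
import numpy as np

rng = np.random.default_rng(1)
num_devices = 4
data = [rng.normal(size=(rng.integers(20, 60), 2)) for _ in range(num_devices)]
compute = rng.uniform(0.2, 1.0, size=num_devices)    # relative compute capability

# D2D step: the weakest device pushes half of its samples to the strongest one.
weak, strong = int(np.argmin(compute)), int(np.argmax(compute))
moved, kept = np.array_split(data[weak], 2)
data[weak], data[strong] = kept, np.vstack([data[strong], moved])

# D2S step: each device fits a toy "model" (its data mean); the server averages
# the local models weighted by local dataset size.
local_models = np.array([d.mean(axis=0) for d in data])
sizes = np.array([len(d) for d in data], dtype=float)
global_model = (local_models * (sizes / sizes.sum())[:, None]).sum(axis=0)
print(global_model)
```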

* This paper has been accepted for publication in IEEE Communications Magazine 

Interference Cancellation GAN Framework for Dynamic Channels

Aug 17, 2022
Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H. Vincent Poor

Symbol detection is a fundamental and challenging problem in modern communication systems, e.g., the multiuser multiple-input multiple-output (MIMO) setting. Iterative Soft Interference Cancellation (SIC) is a state-of-the-art method for this task and has recently motivated data-driven neural network models, e.g., DeepSIC, that can deal with unknown non-linear channels. However, these neural network models require thorough, time-consuming training before deployment and are thus not readily suitable for highly dynamic channels in practice. We introduce an online training framework that can swiftly adapt to any changes in the channel. Our proposed framework unifies recent deep unfolding approaches with the emerging generative adversarial networks (GANs) to capture any changes in the channel and quickly adjust the networks to maintain the top performance of the model. We demonstrate that our framework significantly outperforms recent neural network models on highly dynamic channels and even surpasses them on static channels in our experiments.
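
For context, the classical iterative soft interference cancellation loop that DeepSIC and this framework build on can be sketched as below for a toy 2-user BPSK channel; the channel matrix, noise level, and iteration count are illustrative, and this is the conventional baseline rather than the paper's GAN-based model.

```python
# Minimal sketch of classical iterative soft interference cancellation (SIC) on a
# toy known 2-user linear channel with BPSK symbols (illustrative baseline only).
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(2, 2))              # known linear channel (assumption)
x = rng.choice([-1.0, 1.0], size=2)      # transmitted BPSK symbols
sigma2 = 0.1
y = H @ x + rng.normal(scale=np.sqrt(sigma2), size=2)

x_soft = np.zeros(2)                     # soft symbol estimates
for _ in range(5):                       # SIC iterations
    for k in range(2):
        interference = H[:, 1 - k] * x_soft[1 - k]   # other user's contribution
        residual = y - interference
        llr = 2.0 * (H[:, k] @ residual) / sigma2     # matched-filter LLR for BPSK
        x_soft[k] = np.tanh(llr / 2.0)                # soft symbol via tanh

print("detected:", np.sign(x_soft), "true:", x)
```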

Embedding Alignment for Unsupervised Federated Learning via Smart Data Exchange

Aug 04, 2022
Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Mung Chiang, Christopher G. Brinton

Federated learning (FL) has been recognized as one of the most promising solutions for distributed machine learning (ML). In most of the current literature, FL has been studied for supervised ML tasks, in which edge devices collect labeled data. Nevertheless, in many applications, it is impractical to assume the existence of labeled data across devices. To this end, we develop a novel methodology, Cooperative Federated unsupervised Contrastive Learning (CF-CL), for FL across edge devices with unlabeled datasets. CF-CL employs local device cooperation where data are exchanged among devices through device-to-device (D2D) communications to avoid local model bias resulting from non-independent and identically distributed (non-i.i.d.) local datasets. CF-CL introduces a push-pull smart data sharing mechanism tailored to unsupervised FL settings, in which each device pushes a subset of its local data points to its neighbors as reserved data points and pulls a set of data points from its neighbors, sampled through a probabilistic importance sampling technique. We demonstrate that CF-CL leads to (i) alignment of the unsupervised learned latent spaces across devices, (ii) faster global convergence, allowing for less frequent global model aggregations, and (iii) effectiveness in extreme non-i.i.d. data settings across devices.
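
A minimal sketch of the push-pull exchange with probabilistic importance sampling, under assumptions: device A reserves a random subset of its data for neighbor B, and B pulls points with probability proportional to how far they lie from B's own data mean. That distance-based importance score is an illustrative stand-in for the paper's sampling technique.

```python
# Minimal sketch of a push-pull D2D data exchange (scores and sizes are assumptions).
import numpy as np

rng = np.random.default_rng(3)
data_a = rng.normal(loc=0.0, size=(100, 2))   # device A's non-i.i.d. local data
data_b = rng.normal(loc=3.0, size=(100, 2))   # device B's local data

# Push: A reserves a small random subset of its data points for neighbor B.
reserved = data_a[rng.choice(len(data_a), size=20, replace=False)]

# Pull: B samples from the reserved set with probability proportional to how far
# each point lies from B's own data mean (more "novel" points are likelier).
scores = np.linalg.norm(reserved - data_b.mean(axis=0), axis=1)
probs = scores / scores.sum()
pulled = reserved[rng.choice(len(reserved), size=5, replace=False, p=probs)]

data_b = np.vstack([data_b, pulled])          # B trains on the augmented local data
print(pulled)
```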

* Accepted for publication in IEEE Global Communications Conferences (GLOBECOM), 2022 

Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point

Mar 26, 2022
Bhargav Ganguly, Seyyedali Hosseinalipour, Kwang Taik Kim, Christopher G. Brinton, Vaneet Aggarwal, David J. Love, Mung Chiang

We propose cooperative edge-assisted dynamic federated learning (CE-FL). CE-FL introduces a distributed machine learning (ML) architecture, where data collection is carried out at the end devices, while model training is conducted cooperatively at the end devices and the edge servers, enabled via data offloading from the end devices to the edge servers through base stations. CE-FL also introduces a floating aggregation point, where the local models generated at the devices and the servers are aggregated at an edge server that varies from one model training round to another to cope with network evolution in terms of data distribution and user mobility. CE-FL considers the heterogeneity of network elements in terms of communication/computation models and their proximity to one another. CE-FL further presumes a dynamic environment with online variation of data at the network devices, which causes a drift in ML model performance. We model the processes involved in CE-FL and conduct an analytical convergence analysis of its ML model training. We then formulate network-aware CE-FL, which aims to adaptively optimize all the network elements by tuning their contributions to the learning process, and which turns out to be a non-convex mixed-integer problem. Motivated by the large scale of the system, we propose a distributed optimization solver to break down the computation of the solution across the network elements. We finally demonstrate the effectiveness of our framework with data collected from a real-world testbed.
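
A minimal sketch of the floating aggregation point, assuming a toy distance-based upload cost: in each round, the edge server with the lowest total cost from the current participants hosts the aggregation, so the aggregation point moves as the network evolves. The cost model and stand-in local updates are assumptions, not CE-FL's optimization.

```python
# Minimal sketch of a per-round floating aggregation point (illustrative cost model).
import numpy as np

rng = np.random.default_rng(4)
num_servers, num_nodes, D = 3, 6, 4
server_pos = rng.uniform(size=(num_servers, 2))

for rnd in range(3):
    node_pos = rng.uniform(size=(num_nodes, 2))        # nodes move between rounds
    local_models = rng.normal(size=(num_nodes, D))     # stand-in local updates
    # total upload cost to each candidate server ~ sum of node-server distances
    cost = np.linalg.norm(node_pos[:, None, :] - server_pos[None, :, :], axis=2).sum(axis=0)
    host = int(np.argmin(cost))                        # floating aggregation point
    global_model = local_models.mean(axis=0)           # aggregation at the chosen server
    print(f"round {rnd}: aggregate at server {host}, cost {cost[host]:.2f}")
```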

Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing

Mar 23, 2022
Hung T. Nguyen, H. Vincent Poor, Mung Chiang

Federated learning is a prime candidate for distributed machine learning at the network edge due to its low communication complexity and privacy protection, among other attractive properties. However, existing algorithms face issues with slow convergence and/or robustness of performance due to the considerable heterogeneity of data distributions, computation, and communication capabilities at the edge. In this work, we tackle both of these issues by focusing on the key component of model aggregation in federated learning systems and studying optimal algorithms to perform this task. In particular, we propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction in each round of optimization. This context-dependent bound is derived from the particular devices participating in that round and an assumption on the smoothness of the overall loss function. We show that this aggregation leads to a definite reduction of the loss function in every round. Furthermore, our aggregation can be integrated with many existing algorithms to obtain their contextual versions. Our experimental results demonstrate significant improvements in convergence speed and robustness of the contextual versions compared to the original algorithms. We also consider different variants of the contextual aggregation and show robust performance even in the most extreme settings.
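
To make the smoothness argument concrete, the sketch below scales the weighted aggregate of per-device update directions by the step size that minimizes the standard L-smooth upper bound on the loss, which guarantees a non-negative loss reduction in each round. The device weights, smoothness constant, and gradient estimate are illustrative assumptions, not the paper's exact contextual rule.

```python
# Minimal sketch: pick the scaling of the averaged direction d that minimizes
#   f(w - eta*d) <= f(w) - eta*<g, d> + (L/2)*eta^2*||d||^2,
# giving a guaranteed loss reduction whenever <g, d> > 0. All quantities are toy values.
import numpy as np

rng = np.random.default_rng(5)
L_smooth = 4.0
g = rng.normal(size=8)                            # (estimated) global gradient at w
device_dirs = g + 0.3 * rng.normal(size=(5, 8))   # noisy per-device update directions
sizes = rng.integers(10, 100, size=5).astype(float)

d = (device_dirs * (sizes / sizes.sum())[:, None]).sum(axis=0)   # weighted aggregate
eta = max(g @ d, 0.0) / (L_smooth * (d @ d))      # minimizer of the quadratic bound
guaranteed_drop = eta * (g @ d) - 0.5 * L_smooth * eta**2 * (d @ d)
print(f"step {eta:.4f}, guaranteed loss reduction >= {guaranteed_drop:.4f}")
```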

* 10 pages 

Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks

Feb 12, 2022
Seyyedali Hosseinalipour, Su Wang, Nicolo Michelusi, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Mung Chiang

Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices, via iterative local updates (at devices) and global aggregations (at the server). In this paper, we develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions: (i) Network, allowing decentralized cooperation among the devices via device-to-device (D2D) communications. (ii) Heterogeneity, interpreted at three levels: (ii-a) Learning: PSL considers heterogeneous numbers of stochastic gradient descent iterations with different mini-batch sizes at the devices; (ii-b) Data: PSL presumes a dynamic environment with data arrival and departure, where the distributions of local datasets evolve over time, captured via a new metric for model/concept drift; (ii-c) Device: PSL considers devices with different computation and communication capabilities. (iii) Proximity, where devices have different distances to each other and to the access point. PSL considers the realistic scenario where global aggregations are conducted with idle times in between them for resource efficiency improvements, and incorporates data dispersion and model dispersion with local model condensation into FedL. Our analysis sheds light on the notion of cold vs. warmed-up models and model inertia in distributed machine learning. We then propose network-aware dynamic model tracking to optimize the trade-off between model learning and resource efficiency, which we show is an NP-hard signomial programming problem. We finally solve this problem by proposing a general optimization solver. Our numerical results reveal new findings on the interdependencies between the idle times in between global aggregations, model/concept drift, and D2D cooperation configuration.
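
A minimal sketch of one PSL-flavored round under toy assumptions: devices run heterogeneous numbers of local SGD steps on simple quadratic losses, perform one D2D gossip (consensus) step over a ring, and then the server aggregates. The losses, mixing matrix, and step counts are illustrative, not the paper's model.

```python
# Minimal sketch: heterogeneous local SGD + one D2D gossip step + global aggregation.
import numpy as np

rng = np.random.default_rng(6)
num_devices, D = 4, 3
targets = rng.normal(size=(num_devices, D))       # minima of toy local losses
w = np.zeros((num_devices, D))                    # local models
steps = [1, 2, 5, 3]                              # heterogeneous local iteration counts
lr = 0.2

for i in range(num_devices):
    for _ in range(steps[i]):
        grad = w[i] - targets[i]                  # gradient of 0.5*||w - target_i||^2
        w[i] = w[i] - lr * grad

# D2D consensus on a ring: doubly stochastic mixing matrix, one gossip step
W = np.zeros((num_devices, num_devices))
for i in range(num_devices):
    W[i, i], W[i, (i - 1) % num_devices], W[i, (i + 1) % num_devices] = 0.5, 0.25, 0.25
w = W @ w

global_model = w.mean(axis=0)                     # global aggregation at the server
print(global_model)
```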

Adversarial Neural Networks for Error Correcting Codes

Dec 21, 2021
Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H. Vincent Poor

Error correcting codes are a fundamental component in modern-day communication systems, demanding extremely high throughput, ultra-reliability, and low latency. Recent approaches using machine learning (ML) models as the decoders offer both improved performance and great adaptability to unknown environments, where traditional decoders struggle. We introduce a general framework to further boost the performance and applicability of ML models. We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words and, hence, guides the decoding models to recover transmitted codewords. Our framework is game-theoretic, motivated by generative adversarial networks (GANs), with the decoder and discriminator competing in a zero-sum game. The decoder learns to simultaneously decode and generate codewords while the discriminator learns to tell the differences between decoded outputs and codewords. Thus, the decoder is able to decode noisy received signals into codewords, increasing the probability of successful decoding. We show a strong connection between our framework and the optimal maximum likelihood decoder by proving that this decoder defines a Nash equilibrium point of our game. Hence, training to equilibrium has a good chance of achieving optimal maximum likelihood performance. Moreover, our framework does not require training labels, which are typically unavailable during communications, and can thus seemingly be trained online and adapt to channel dynamics. To demonstrate the performance of our framework, we combine it with recent neural decoders and show improved performance compared to the original models and traditional decoding algorithms on various codes.
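
In GAN-style notation, the zero-sum game described above would take roughly the following form (the notation is illustrative, not necessarily the paper's exact objective), with the decoder $\mathrm{Dec}_{\theta}$ playing the role of the generator and the discriminator $D_{\phi}$ trying to separate true codewords from decoded outputs:

$$\min_{\theta}\;\max_{\phi}\;\; \mathbb{E}_{c \sim \mathcal{C}}\big[\log D_{\phi}(c)\big] \;+\; \mathbb{E}_{y}\big[\log\big(1 - D_{\phi}(\mathrm{Dec}_{\theta}(y))\big)\big],$$

where $\mathcal{C}$ is the set of codewords, $y$ is the noisy received word, and $\mathrm{Dec}_{\theta}(y)$ is the decoder's codeword estimate; note that this objective needs no ground-truth labels pairing $y$ with its transmitted codeword.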

* 6 pages, accepted to GLOBECOM 2021 

On-the-fly Resource-Aware Model Aggregation for Federated Learning in Heterogeneous Edge

Dec 21, 2021
Hung T. Nguyen, Roberto Morabito, Kwang Taik Kim, Mung Chiang

Edge computing has revolutionized the world of mobile and wireless networks thanks to its flexibility, security, and performance. Lately, it has increasingly been used to improve the deployment of machine learning (ML) techniques such as federated learning (FL). FL was introduced to improve communication efficiency over conventional distributed ML. The original FL assumes a central aggregation server to combine locally optimized parameters, which can introduce reliability and latency issues. In this paper, we conduct an in-depth study of strategies to replace this central server with a flying master that is dynamically selected based on the current participants and/or available resources at every FL optimization round. Specifically, we compare different metrics to select this flying master and assess consensus algorithms to perform the selection. Our results, based on measurements conducted in our EdgeAI testbed and over real 5G networks using an operational edge testbed, demonstrate a significant runtime reduction of our flying-master FL framework compared to the original FL.
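
A minimal sketch of flying-master selection under assumed metrics: each round, devices score peers on an illustrative blend of free compute and uplink bandwidth (with noisy local views), and a simple majority vote picks the aggregator for that round. The metric weights and voting rule are assumptions, not the paper's exact metrics or consensus algorithms.

```python
# Minimal sketch of per-round flying-master selection (illustrative metrics/voting).
import numpy as np

rng = np.random.default_rng(7)
num_devices = 5

for rnd in range(3):
    cpu_free = rng.uniform(size=num_devices)           # available compute
    bandwidth = rng.uniform(size=num_devices)          # available uplink bandwidth
    score = 0.5 * cpu_free + 0.5 * bandwidth           # weighted resource metric
    # each device observes the scores with some noise and votes for its top peer
    observed = score + 0.05 * rng.normal(size=(num_devices, num_devices))
    votes = observed.argmax(axis=1)
    master = int(np.bincount(votes, minlength=num_devices).argmax())
    print(f"round {rnd}: flying master = device {master} (score {score[master]:.2f})")
```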

* 6 pages, accepted to GLOBECOM 2021 