
Ravikumar Balakrishnan


Multi-Task Model Personalization for Federated Supervised SVM in Heterogeneous Networks

Apr 01, 2023
Aleksei Ponomarenko-Timofeev, Olga Galinina, Ravikumar Balakrishnan, Nageen Himayat, Sergey Andreev, Yevgeni Koucheryavy


Federated systems enable collaborative training on highly heterogeneous data through model personalization, which can be facilitated by employing multi-task learning algorithms. However, significant variation in device computing capabilities may result in substantial degradation in the convergence rate of training. To accelerate the learning procedure for diverse participants in a multi-task federated setting, more efficient and robust methods need to be developed. In this paper, we design an efficient iterative distributed method based on the alternating direction method of multipliers (ADMM) for support vector machines (SVMs), which tackles federated classification and regression. The proposed method utilizes efficient computations and model exchange in a network of heterogeneous nodes and allows personalization of the learning model in the presence of non-i.i.d. data. To further enhance privacy, we introduce a random mask procedure that helps avoid data inversion. Finally, we analyze the impact of the proposed privacy mechanisms and participant hardware and data heterogeneity on the system performance.
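
A minimal sketch of one such round for a linear hinge-loss SVM, written as consensus-style ADMM with scaled dual variables and an additive random mask on the uploaded vectors; the variable names, the subgradient local solver, and the mask-cancellation scheme are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def local_svm_step(w, X, y, z, u, rho, C=1.0, lr=0.01, iters=50):
    """Approximately solve the node-local subproblem
        min_w  C * sum_i max(0, 1 - y_i * (x_i @ w)) + (rho / 2) * ||w - z + u||^2
    with plain subgradient descent (an illustrative choice; the paper derives
    a more efficient iterative solver)."""
    for _ in range(iters):
        margins = y * (X @ w)
        active = margins < 1                                 # margin-violating samples
        grad = -C * (y[active, None] * X[active]).sum(axis=0)
        grad += rho * (w - z + u)                            # proximal pull toward the shared model
        w = w - lr * grad
    return w

def federated_admm_round(models, duals, data, z, rho, rng):
    """One synchronous round: masked local updates, then aggregation."""
    K, d = len(models), z.shape[0]
    # Masks that cancel in the average; in practice nodes would agree on
    # pairwise-canceling masks without revealing them to the aggregator.
    masks = rng.normal(scale=0.1, size=(K, d))
    masks -= masks.mean(axis=0)
    uploads = []
    for k, (X, y) in enumerate(data):
        models[k] = local_svm_step(models[k], X, y, z, duals[k], rho)
        uploads.append(models[k] + duals[k] + masks[k])      # only masked sums leave the node
    z = np.mean(uploads, axis=0)                             # masks cancel here
    for k in range(K):
        duals[k] = duals[k] + models[k] - z                  # scaled dual update
    return models, duals, z
```

In this reading, each node keeps its own w_k that is only pulled toward the shared z, which is one assumed way to realize personalization; the paper's multi-task coupling and convergence analysis are more involved than this sketch.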

* 14 pages, 12 figures, 4 tables, 1 algorithm; Added algorithm for iterative solution, updated the abstract, fixed typos 

Sim-to-Real Transfer in Multi-agent Reinforcement Networking for Federated Edge Computing

Oct 18, 2021
Pinyarash Pinyoanuntapong, Tagore Pothuneedi, Ravikumar Balakrishnan, Minwoo Lee, Chen Chen, Pu Wang


Federated Learning (FL) over wireless multi-hop edge computing networks, i.e., multi-hop FL, is a cost-effective distributed on-device deep learning paradigm. This paper presents the FedEdge simulator, a high-fidelity Linux-based simulator that enables fast prototyping as well as sim-to-real code and knowledge transfer for multi-hop FL systems. The FedEdge simulator is built on top of the hardware-oriented FedEdge experimental framework and extends it with a realistic physical-layer emulator. This emulator exploits trace-based channel modeling and dynamic link scheduling to minimize the reality gap between the simulator and the physical testbed. Our initial experiments demonstrate the high fidelity of the FedEdge simulator and its superior performance in sim-to-real knowledge transfer for reinforcement learning-optimized multi-hop FL.
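
A toy illustration of the trace-based channel idea, assuming recorded per-link (throughput, loss) samples from a physical testbed are replayed to estimate multi-hop transfer times; the class and function names here are hypothetical and not the FedEdge API:

```python
class TraceBasedLink:
    """Replays (throughput_mbps, loss_rate) samples recorded on a physical
    testbed instead of drawing from an analytic channel model."""
    def __init__(self, trace):
        self.trace = trace
        self.t = 0

    def step(self):
        sample = self.trace[self.t % len(self.trace)]
        self.t += 1
        return sample

def multihop_transfer_time(model_bytes, path_links):
    """Rough time to push one model update over a multi-hop path: the
    bottleneck throughput of the current trace samples, discounted by the
    worst per-hop loss rate (a deliberately crude scheduling proxy)."""
    samples = [link.step() for link in path_links]
    bottleneck_mbps = min(tp for tp, _ in samples)
    worst_loss = max(loss for _, loss in samples)
    effective_mbps = bottleneck_mbps * (1.0 - worst_loss)
    return model_bytes * 8 / (effective_mbps * 1e6)

# Example: a 10 MB model update over a two-hop path with short recorded traces.
path = [TraceBasedLink([(54.0, 0.02), (36.0, 0.05)]),
        TraceBasedLink([(24.0, 0.10), (48.0, 0.01)])]
print(multihop_transfer_time(10e6, path))
```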

* 5 pages 

MutualNet: Adaptive ConvNet via Mutual Learning from Different Model Configurations

May 14, 2021
Taojiannan Yang, Sijie Zhu, Matias Mendieta, Pu Wang, Ravikumar Balakrishnan, Minwoo Lee, Tao Han, Mubarak Shah, Chen Chen


Most existing deep neural networks are static, meaning they can only perform inference at a fixed complexity. However, the resource budget can vary substantially across different devices. Even on a single device, the affordable budget can change with different scenarios, and repeatedly retraining networks for each required budget would be prohibitively expensive. Therefore, in this work, we propose a general method called MutualNet to train a single network that can run under a diverse set of resource constraints. Our method trains a cohort of model configurations with various network widths and input resolutions. This mutual learning scheme not only allows the model to run at different width-resolution configurations but also transfers the unique knowledge among these configurations, helping the model to learn stronger representations overall. MutualNet is a general training methodology that can be applied to various network structures (e.g., 2D networks such as MobileNets and ResNet, and 3D networks such as SlowFast and X3D) and various tasks (e.g., image classification, object detection, segmentation, and action recognition), and it achieves consistent improvements on a variety of datasets. Since we only train the model once, it also greatly reduces the training cost compared to independently training several models. Surprisingly, MutualNet can also be used to significantly boost the performance of a single network if dynamic resource constraints are not a concern. In summary, MutualNet is a unified method for both static and adaptive, 2D and 3D networks. Code and pre-trained models are available at \url{https://github.com/taoyang1122/MutualNet}.
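
A rough sketch of a single mutual-learning training step in this spirit, assuming slimmable-style layers that switch width via a width_mult attribute (as in the slimmable-network line of work); the sampled widths, resolutions, and distillation loss below are illustrative rather than the released implementation:

```python
import torch
import torch.nn.functional as F

def mutualnet_step(model, images, labels, optimizer,
                   widths=(1.0, 0.75, 0.5),
                   resolutions=(224, 192, 160, 128)):
    """One training step: the full-width network on the full-resolution input
    is supervised by ground truth; sub-width networks on downscaled inputs are
    supervised by its soft predictions (knowledge transfer across configurations)."""
    optimizer.zero_grad()
    model.apply(lambda m: setattr(m, "width_mult", widths[0]))   # full width
    full_logits = model(images)
    loss = F.cross_entropy(full_logits, labels)
    soft = F.softmax(full_logits.detach(), dim=1)
    for w in widths[1:]:                                         # sub-widths
        r = resolutions[torch.randint(1, len(resolutions), (1,)).item()]
        small = F.interpolate(images, size=r, mode="bilinear", align_corners=False)
        model.apply(lambda m, w=w: setattr(m, "width_mult", w))  # switch sub-network width
        logits = model(small)
        loss = loss + F.kl_div(F.log_softmax(logits, dim=1), soft,
                               reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()
```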

* Extended version of arXiv:1909.12978. More experiments on 3D networks (SlowFast, X3D) and analyses on training cost 