Jonathan Scott

PeFLL: A Lifelong Learning Approach to Personalized Federated Learning

Jun 08, 2023
Jonathan Scott, Hossein Zakerinia, Christoph H. Lampert


Personalized federated learning (pFL) has emerged as a popular approach to addressing the statistical heterogeneity of the participating clients' data distributions. Instead of learning a single global model, pFL aims to learn an individual model for each client while still making use of the data available at other clients. In this work, we present PeFLL, a new pFL approach rooted in lifelong learning that performs well not only on clients present during its training phase, but also on any that may emerge in the future. PeFLL learns to output client-specific models by jointly training an embedding network and a hypernetwork. The embedding network learns to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork learns a mapping from this latent space to the space of possible client models. We demonstrate experimentally that PeFLL produces models of superior accuracy compared to previous methods, especially for clients not seen during training, and that it scales well to large numbers of clients. Moreover, generating a personalized model for a new client is efficient, as no additional fine-tuning or optimization is required of either the client or the server. We also present theoretical results supporting PeFLL in the form of a new PAC-Bayesian generalization bound for lifelong learning, and we prove the convergence of our proposed optimization procedure.
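To make the embedding-network/hypernetwork interplay concrete, here is a minimal PyTorch sketch: an embedding network averages per-sample features into a client descriptor, and a hypernetwork maps that descriptor to the flat parameter vector of a small personalized classifier. All architecture choices, layer sizes, and names (EmbeddingNet, HyperNet, LATENT_DIM, ...) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the PeFLL idea (assumed sizes and architectures, not the
# paper's exact setup): embed a client's batch into a descriptor, then let a
# hypernetwork emit the weights of a small personalized model.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 32                   # dimensionality of the client descriptor (assumed)
IN_DIM, HID, OUT = 784, 64, 10    # target-model sizes (assumed)

class EmbeddingNet(nn.Module):
    """Maps a batch of one client's samples to a single descriptor vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IN_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT_DIM))
    def forward(self, x):
        # Average per-sample embeddings so the descriptor is
        # permutation-invariant with respect to the client's batch.
        return self.net(x).mean(dim=0)

class HyperNet(nn.Module):
    """Maps a client descriptor to the weights of a two-layer target model."""
    def __init__(self):
        super().__init__()
        n_params = IN_DIM * HID + HID + HID * OUT + OUT
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, n_params))
    def forward(self, z):
        theta = self.net(z)
        # Slice the flat parameter vector into the target model's tensors.
        i = 0
        w1 = theta[i:i + IN_DIM * HID].view(HID, IN_DIM); i += IN_DIM * HID
        b1 = theta[i:i + HID]; i += HID
        w2 = theta[i:i + HID * OUT].view(OUT, HID); i += HID * OUT
        b2 = theta[i:i + OUT]
        return w1, b1, w2, b2

def personalized_forward(params, x):
    """Run the generated two-layer classifier on input x."""
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

# One joint training step on a single client's labeled batch: the loss of the
# generated model backpropagates through both the hypernetwork and the
# embedding network, so the two are trained together.
embed, hyper = EmbeddingNet(), HyperNet()
opt = torch.optim.Adam(list(embed.parameters()) + list(hyper.parameters()))
x, y = torch.randn(16, IN_DIM), torch.randint(0, OUT, (16,))
loss = F.cross_entropy(personalized_forward(hyper(embed(x)), x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

Note how this setup reflects the abstract's efficiency claim: once training is done, a previously unseen client obtains its personalized model from a single forward pass through the embedding network and hypernetwork, with no additional fine-tuning or optimization.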


FedProp: Cross-client Label Propagation for Federated Semi-supervised Learning

Oct 12, 2022
Jonathan Scott, Michelle Yeo, Christoph H. Lampert


Federated learning (FL) allows multiple clients to jointly train a machine learning model without any client having to share their data with any other participating party. In the supervised setting, where all client data is fully labeled, FL has been widely adopted for learning tasks that require data privacy. However, how best to perform federated learning in a semi-supervised setting, where clients possess data that is only partially labeled or even completely unlabeled, remains an open research question. In this work, we propose a new method, FedProp, that follows a manifold-based approach to semi-supervised learning (SSL). It estimates the data manifold jointly from the data of multiple clients and computes pseudo-labels using cross-client label propagation. To ensure that clients never have to share their data with anyone, FedProp employs two cryptographically secure yet highly efficient protocols: secure Hamming distance computation and secure summation. Experiments on three standard benchmarks show that FedProp achieves higher classification accuracy than previous federated SSL methods. Furthermore, as a pseudo-label-based technique, FedProp is complementary to other federated SSL approaches, in particular consistency-based ones. We demonstrate experimentally that further accuracy gains are possible by combining the two.
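As an illustration of the label-propagation core, the following single-machine sketch builds a Gaussian-kernel kNN graph and iterates the classic propagation update F ← αSF + (1−α)Y (in the style of Zhou et al.) to produce pseudo-labels. In FedProp itself the graph spans data held by different clients and is computed via the secure Hamming-distance and secure-summation protocols mentioned above; those protocols are omitted here, and all function names and parameter values (k, alpha, n_iter) are assumptions.

```python
# Single-machine sketch of manifold-based label propagation, a plausible
# analogue of the pseudo-labeling step in FedProp. The cross-client and
# cryptographic parts of the actual method are omitted; data is pooled
# locally purely for illustration, and all parameter values are assumed.
import numpy as np

def label_propagation(X, y, n_classes, k=10, alpha=0.99, n_iter=50):
    """Propagate labels over a kNN affinity graph.

    X : (n, d) feature matrix (here: all clients' data pooled)
    y : (n,) labels, with -1 marking unlabeled points
    returns : (n,) pseudo-labels for every point
    """
    n = X.shape[0]
    # Gaussian-kernel affinities from squared pairwise distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2) + 1e-12
    W = np.exp(-d2 / sigma2)
    np.fill_diagonal(W, 0.0)
    # Keep only each point's k strongest edges, then symmetrize.
    idx = np.argsort(-W, axis=1)[:, k:]
    np.put_along_axis(W, idx, 0.0, axis=1)
    W = np.maximum(W, W.T)
    # Normalized smoothing operator S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot label matrix; unlabeled rows stay all-zero.
    Y = np.zeros((n, n_classes))
    labeled = y >= 0
    Y[labeled, y[labeled]] = 1.0
    # Iterate F <- alpha * S F + (1 - alpha) * Y.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(1)

# Toy usage: two clusters with one labeled point each; the labels spread
# along the manifold to the remaining unlabeled points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = -np.ones(40, dtype=int)
y[0], y[20] = 0, 1
print(label_propagation(X, y, n_classes=2, k=5))
```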
