Abstract: Transformer-based large language models (LLMs) have achieved remarkable success across a wide range of tasks. Yet fine-tuning such massive models in federated learning (FL) settings poses significant challenges due to resource constraints and communication overhead. Low-Rank Adaptation (LoRA) addresses these issues by training compact, low-rank matrices instead of fully fine-tuning large models. This paper introduces a wireless federated LoRA fine-tuning framework that optimizes both learning performance and communication efficiency. We provide a novel convergence analysis revealing how the LoRA rank and covariance effects influence FL training dynamics. Leveraging these insights, we propose Sparsified Orthogonal Fine-Tuning (\textbf{SOFT}), an adaptive sparsification method that streamlines parameter updates without expensive matrix multiplications or singular value decomposition (SVD) operations. Additionally, we present a Two-Stage Federated Algorithm (\textbf{TSFA}) that pre-determines key parameters offline and dynamically adjusts bandwidth and sparsification online, ensuring efficient training under latency constraints. Experiments on benchmark datasets show that our approach achieves accuracy comparable to that of ideal-scenario models while significantly reducing communication overhead. Our framework thus enables scalable, resource-efficient deployment of large models in real-world wireless FL scenarios.
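To make the sparsified low-rank update concrete, the sketch below combines a standard LoRA weight update with magnitude-based masking of the client update before transmission. This is a minimal illustration only: the function names, the keep_ratio parameter, and the top-k masking rule are assumptions for exposition, and the abstract does not specify how SOFT's adaptive, orthogonality-aware sparsification actually selects entries.

```python
import torch

def lora_update(base_weight, A, B, alpha=16.0):
    # LoRA reparameterization: W = W0 + (alpha / r) * B @ A,
    # with A of shape (r, d_in) and B of shape (d_out, r).
    r = A.shape[0]
    return base_weight + (alpha / r) * (B @ A)

def sparsify_update(delta, keep_ratio=0.1):
    # Keep only the largest-magnitude entries of the client update
    # (hypothetical rule; no matrix multiplications or SVD required).
    flat = delta.abs().flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    mask = delta.abs() >= threshold
    return delta * mask
```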
Abstract: This paper introduces a novel privacy-enhanced over-the-air federated learning (OTA-FL) framework using client-driven power balancing (CDPB) to address privacy concerns in OTA-FL systems. In prior studies, the server determines power balancing based on the continuous transmission of channel state information (CSI) from each client. Moreover, these approaches focus on satisfying privacy requirements in every global iteration, which heightens the risk of privacy exposure as the learning process lengthens. To mitigate these risks, we propose two CDPB strategies -- CDPB-n (noisy) and CDPB-i (idle) -- that allow clients to adjust their transmission power independently, without sharing CSI. CDPB-n transmits noise during poor channel conditions, while CDPB-i pauses transmission until conditions improve. To further enhance privacy and learning efficiency, we present a mixed strategy, CDPB-mixed, which combines CDPB-n and CDPB-i. Our experimental results show that CDPB outperforms traditional approaches in terms of model accuracy and privacy guarantees, providing a practical solution for enhancing OTA-FL in resource-constrained environments.
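The client-side behavior described above can be sketched as a simple decision rule: transmit the scaled update when the channel is good, otherwise inject noise (CDPB-n), stay idle (CDPB-i), or randomize between the two (CDPB-mixed). The threshold, the mixing probability, and the function name below are hypothetical placeholders, not parameters given in the abstract.

```python
import numpy as np

def cdpb_client_action(channel_gain, threshold, strategy="mixed",
                       p_noise=0.5, rng=None):
    """Illustrative client-side decision rule (hypothetical parameters).

    strategy: 'noisy' (CDPB-n), 'idle' (CDPB-i), or 'mixed' (CDPB-mixed).
    Returns 'transmit', 'noise', or 'idle'.
    """
    rng = rng or np.random.default_rng()
    if channel_gain >= threshold:
        return "transmit"      # good channel: send the power-scaled model update
    if strategy == "noisy":
        return "noise"         # CDPB-n: transmit noise during poor conditions
    if strategy == "idle":
        return "idle"          # CDPB-i: pause until the channel improves
    # CDPB-mixed: randomly fall back to one of the two behaviors
    return "noise" if rng.random() < p_noise else "idle"
```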