Abstract: A sophisticated hybrid quantum convolutional neural network (HQCNN) is conceived for handling the pilot assignment task in cell-free massive MIMO systems, while maximizing the total ergodic sum throughput. The existing model-based solutions found in the literature are inefficient and/or computationally demanding. Similarly, conventional deep neural networks may struggle in the face of high-dimensional inputs, require complex architectures, and converge slowly owing to their large number of trainable parameters. The proposed HQCNN leverages parameterized quantum circuits (PQCs) relying on superposition for enhanced feature extraction. Specifically, we reuse the same PQC across all the convolutional layers, which both customizes the neural network and accelerates its convergence. Our numerical results demonstrate that the proposed HQCNN attains a total network throughput close to that of the excessive-complexity exhaustive search and outperforms the state-of-the-art benchmarks.
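The idea of sharing one PQC across all convolutional positions can be illustrated with a toy classical simulation. The sketch below is a minimal single-qubit "quanvolution" written in NumPy; the gate sequence, patch encoding, and the helper names (`pqc_feature`, `quanv_layer`) are illustrative assumptions, not the architecture from the paper:

```python
import numpy as np

def ry(a):
    # Single-qubit rotation about the Y axis
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(b):
    # Single-qubit rotation about the Z axis
    return np.diag([np.exp(-1j * b / 2), np.exp(1j * b / 2)])

def pqc_feature(x, theta):
    # Encode scalar x as a rotation angle, apply the shared trainable
    # gates RY(theta[0]) and RZ(theta[1]), and return the expectation <Z>.
    psi = rz(theta[1]) @ ry(theta[0]) @ ry(np.pi * x) @ np.array([1, 0], dtype=complex)
    return float(np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2)

def quanv_layer(img, theta, k=2):
    # Slide a k x k window over the image; every patch mean is mapped
    # through the SAME parameterized circuit (shared weights theta).
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = pqc_feature(img[i:i + k, j:j + k].mean(), theta)
    return out

theta = np.array([0.3, 1.1])          # shared trainable parameters
img = np.random.default_rng(0).random((6, 6))
print(quanv_layer(img, theta).shape)  # (5, 5)
```

Because the same two parameters serve every patch and layer, the trainable-parameter count stays constant as the input grows, which is the convergence advantage the abstract alludes to.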
Abstract: Domain Generalization (DG) aims to learn, from multiple known source domains, a model that generalizes well to unknown target domains. One of the key approaches in DG is training an encoder that generates domain-invariant representations. However, this approach is not applicable in Federated Domain Generalization (FDG), where data from the various domains are distributed across different clients. In this paper, we introduce a novel approach, dubbed Federated Learning via On-server Matching Gradient (FedOMG), which can \emph{efficiently leverage domain information from distributed domains}. Specifically, we utilize the local gradients as information about the distributed models and find an invariant gradient direction across all domains through gradient inner product maximization. The advantages are two-fold: 1) FedOMG can aggregate the characteristics of distributed models on the centralized server without incurring any additional communication cost, and 2) FedOMG is orthogonal to many existing FL/FDG methods, allowing for further performance gains through seamless integration with them. Extensive experimental evaluations in various settings demonstrate the robustness of FedOMG compared to other FL/FDG baselines. Our method outperforms recent SOTA baselines on four FL benchmark datasets (MNIST, EMNIST, CIFAR-10, and CIFAR-100) and three FDG benchmark datasets (PACS, VLCS, and OfficeHome).
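The on-server search for an invariant direction can be sketched in a few lines of NumPy. The toy below mixes the clients' local gradients with softmax weights and ascends the worst-case inner product between the mixed direction and each client gradient; the objective, the helper name `invariant_direction`, and all step sizes are illustrative assumptions rather than the exact FedOMG formulation:

```python
import numpy as np

def invariant_direction(grads, steps=200, lr=0.1):
    # Server-side toy search: d = sum_j w_j g_j with softmax weights w,
    # nudged to increase the minimum inner product min_i <d, g_i>,
    # i.e. a direction no client's domain objective disagrees with.
    G = np.stack(grads)                      # (num_clients, dim)
    z = np.zeros(len(grads))                 # logits for the mixing weights
    for _ in range(steps):
        w = np.exp(z) / np.exp(z).sum()
        d = w @ G                            # candidate shared direction
        ips = G @ d                          # inner products <g_i, d>
        i = int(np.argmin(ips))              # worst-aligned client gradient
        # softmax-gradient ascent step on <d, g_i> with respect to z
        c = G @ G[i]
        z += lr * (w * c - w * (w @ c))
    w = np.exp(z) / np.exp(z).sum()
    return w @ G

# Usage: two client gradients; the result is a convex combination of them.
d = invariant_direction([np.array([1.0, 0.0]), np.array([0.6, 0.8])])
print(d.shape)  # (2,)
```

Note that this whole computation uses only the gradients the clients already upload in standard federated averaging, matching the abstract's claim of no additional communication cost.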