Federated learning (FL) is a privacy-preserving collaborative learning framework, and differential privacy can be applied to further enhance its privacy protection. Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm and implement differential privacy with a Gaussian mechanism. However, the inherent privacy-utility trade-off in these systems severely degrades training performance when a tight privacy budget is enforced. Besides, the Gaussian mechanism requires model weights to be of high precision. To improve communication efficiency and achieve a better privacy-utility trade-off, we propose a communication-efficient FL training algorithm with a differential privacy guarantee. Specifically, we adopt binary neural networks (BNNs) and introduce discrete noise in the FL setting. Binary model parameters are uploaded for higher communication efficiency, and discrete noise is added to achieve client-level differential privacy protection. The achieved performance guarantee is rigorously proved and shown to depend on the level of discrete noise. Experimental results on the MNIST and Fashion-MNIST datasets demonstrate that the proposed training algorithm achieves client-level privacy protection with a performance gain while enjoying the benefits of low communication overhead from binary model updates.
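To make the mechanism concrete, the following is a minimal sketch of binarized uploads perturbed by discrete noise, assuming sign binarization and a centered-binomial noise distribution; the function names and the specific noise mechanism are illustrative stand-ins rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    """Map real-valued weights to {-1, +1} via sign binarization."""
    return np.where(w >= 0, 1, -1).astype(np.int64)

def discrete_noise(shape, n_trials=8):
    """Symmetric integer-valued noise: centered binomial Bin(n, 1/2) - n/2
    (an illustrative stand-in for the paper's discrete mechanism)."""
    return rng.binomial(n_trials, 0.5, size=shape) - n_trials // 2

def client_update(local_weights):
    """Client side: upload binarized weights perturbed by discrete noise."""
    b = binarize(local_weights)
    return b + discrete_noise(b.shape)

def server_aggregate(noisy_updates):
    """Server side: average the noisy binary updates across clients;
    the zero-mean noise roughly cancels as the number of clients grows."""
    return np.mean(noisy_updates, axis=0)

# Toy run: 10 clients, each holding a 5-dimensional local model
updates = [client_update(rng.standard_normal(5)) for _ in range(10)]
print(server_aggregate(updates))
```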
Extremely large-scale multiple-input-multiple-output (XL-MIMO), which offers vast spatial degrees of freedom, has emerged as a potentially pivotal enabling technology for the sixth generation (6G) of wireless mobile networks. As its significance grows, opportunities and challenges are emerging in tandem. This paper presents a comprehensive survey of research on XL-MIMO wireless systems. In particular, we introduce four XL-MIMO hardware architectures: uniform linear array (ULA)-based XL-MIMO, uniform planar array (UPA)-based XL-MIMO utilizing either patch antennas or point antennas, and continuous aperture (CAP)-based XL-MIMO. We comprehensively analyze and discuss their characteristics and interrelationships. Following this, we examine exact and approximate near-field channel models for XL-MIMO. Given the distinct electromagnetic properties of near-field communications, we present a range of channel models to demonstrate the benefits of XL-MIMO. We further motivate and discuss low-complexity signal processing schemes to promote the practical implementation of XL-MIMO. Furthermore, we explore the interplay between XL-MIMO and other emerging 6G technologies. Finally, we outline several compelling research directions for future XL-MIMO wireless communication systems.
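As a concrete illustration of why near-field modeling matters for XL-MIMO, the sketch below compares a spherical-wavefront (near-field) ULA array response with its planar-wavefront (far-field) approximation; the array size, carrier frequency, and source location are arbitrary illustrative values, not taken from the survey.

```python
import numpy as np

def near_field_steering(N, d, lam, r, theta):
    """Spherical-wavefront (near-field) ULA response for a source at range r and angle theta."""
    delta = np.arange(N) - (N - 1) / 2   # element index, centered at the array middle
    r_n = np.sqrt(r**2 + (delta * d)**2 - 2 * r * delta * d * np.sin(theta))
    return np.exp(-1j * 2 * np.pi * (r_n - r) / lam)

def far_field_steering(N, d, lam, theta):
    """Planar-wavefront (far-field) approximation of the same response."""
    delta = np.arange(N) - (N - 1) / 2
    return np.exp(1j * 2 * np.pi * delta * d * np.sin(theta) / lam)

# 512-element ULA at 30 GHz with half-wavelength spacing, source at 10 m and 20 degrees
lam = 3e8 / 30e9
a_near = near_field_steering(512, lam / 2, lam, r=10.0, theta=np.deg2rad(20))
a_far = far_field_steering(512, lam / 2, lam, theta=np.deg2rad(20))
# Normalized correlation drops below 1 when the planar-wave assumption breaks down
print(abs(np.vdot(a_far, a_near)) / 512)
```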
The evolution of wireless networks gravitates towards connected intelligence, a concept that envisions seamless interconnectivity among humans, objects, and intelligence in a hyper-connected cyber-physical world. Edge AI emerges as a promising solution to achieve connected intelligence by delivering high-quality, low-latency, and privacy-preserving AI services at the network edge. In this article, we introduce an autonomous edge AI system that automatically organizes, adapts, and optimizes itself to meet users' diverse requirements. The system employs a cloud-edge-client hierarchical architecture, where the large language model, i.e., Generative Pretrained Transformer (GPT), resides in the cloud, and other AI models are co-deployed on devices and edge servers. By leveraging the powerful abilities of GPT in language understanding, planning, and code generation, we present a versatile framework that efficiently coordinates edge AI models to cater to users' personal demands while automatically generating code to train new models via edge federated learning. Experimental results demonstrate the system's remarkable ability to accurately comprehend user demands, efficiently execute AI models with minimal cost, and effectively create high-performance AI models through federated learning.
Task-oriented communication is an emerging paradigm for next-generation communication networks, which extracts and transmits task-relevant information, instead of raw data, for downstream applications. Most existing deep learning (DL)-based task-oriented communication systems adopt a closed-world scenario, assuming either the same data distribution for training and testing, or that the system has access to a large out-of-distribution (OoD) dataset for retraining. However, in practical open-world scenarios, task-oriented communication systems need to handle unknown OoD data. Under such circumstances, the powerful approximation ability of learning methods may force task-oriented communication systems to overfit the training data (i.e., in-distribution data) and produce overconfident judgments when encountering OoD data. Based on the information bottleneck (IB) framework, we propose a class conditional IB (CCIB) approach in this paper to address this problem, supported by information-theoretic insights. The idea is to extract distinguishable features from in-distribution data while preserving their compactness and informativeness. This is achieved by imposing a class conditional latent prior distribution and enforcing the latent representations of different classes to be far apart from each other. Simulation results demonstrate that the proposed approach detects OoD data more efficiently than baseline and state-of-the-art approaches, without compromising the rate-distortion tradeoff.
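As a rough illustration of the idea, the snippet below sketches a variational IB-style objective with a class-conditional Gaussian latent prior whose per-class means are kept well separated; the loss form, the prior choice, and all tensor names are assumptions for illustration and not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ccib_loss(mu, logvar, logits, labels, class_means, beta=1e-3):
    """Illustrative class conditional IB objective: cross-entropy keeps the latent
    informative for the task, while a KL term to a class-conditional Gaussian prior
    N(class_means[y], I) keeps it compact and pulls different classes toward
    well-separated prior centers (not the paper's exact formulation)."""
    m_y = class_means[labels]                                   # (batch, latent_dim) prior means
    kl = 0.5 * (logvar.exp() + (mu - m_y) ** 2 - 1.0 - logvar).sum(dim=1).mean()
    ce = F.cross_entropy(logits, labels)
    return ce + beta * kl

# Toy usage with hypothetical encoder/classifier outputs
batch, latent_dim, num_classes = 32, 16, 10
mu, logvar = torch.randn(batch, latent_dim), torch.zeros(batch, latent_dim)
logits = torch.randn(batch, num_classes)
labels = torch.randint(0, num_classes, (batch,))
class_means = 3.0 * torch.eye(num_classes, latent_dim)          # separated per-class prior centers
print(ccib_loss(mu, logvar, logits, labels, class_means))
```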
Federated Learning (FL) is a promising distributed learning mechanism that still faces two major challenges, namely privacy breaches and system efficiency. In this work, we reconceptualize the FL system from the perspective of network information theory and formulate an original FL communication framework, FedNC, inspired by Network Coding (NC). The main idea of FedNC is to mix the information of the local models by making random linear combinations of the original packets before uploading them for further aggregation. Owing to the benefits of the coding scheme, both theoretical and experimental analyses indicate that FedNC improves the performance of traditional FL in several important respects, including security, throughput, and robustness. To the best of our knowledge, this is the first framework in which NC is introduced into FL. As FL continues to evolve within practical network frameworks, more applications and variants can be designed based on FedNC.
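For intuition, here is a minimal sketch of network-coding-style mixing of local model updates, assuming real-valued coding coefficients and least-squares decoding at the server; the packetization and the arithmetic used by FedNC itself may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(packets, num_coded):
    """Network-coding-style encoder: each coded packet is a random linear
    combination of all original packets (local model updates)."""
    coeffs = rng.standard_normal((num_coded, len(packets)))     # coding coefficients
    return coeffs, coeffs @ np.stack(packets)

def decode(coeffs, coded):
    """Recover the original packets once the coefficient matrix has full column rank."""
    return np.linalg.lstsq(coeffs, coded, rcond=None)[0]

# Toy example: 4 local updates of dimension 6, mixed into 5 coded packets
local_updates = [rng.standard_normal(6) for _ in range(4)]
coeffs, coded = encode(local_updates, num_coded=5)
recovered = decode(coeffs, coded)
print(np.allclose(recovered, np.stack(local_updates)))          # True: decoding succeeded
fed_avg = recovered.mean(axis=0)                                # aggregation after decoding
```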
Federated learning (FL) has prevailed as an efficient and privacy-preserving scheme for distributed learning. In this work, we focus on optimizing computation and communication in FL from the perspective of pruning. By adopting layer-wise pruning in local training and federated updating, we formulate an explicit FL pruning framework, FedLP (Federated Layer-wise Pruning), which is model-agnostic and universal for different types of deep learning models. Two specific schemes of FedLP are designed for scenarios with homogeneous local models and heterogeneous ones. Both theoretical and experimental evaluations verify that FedLP relieves the system bottlenecks of communication and computation with only marginal performance decay. To the best of our knowledge, FedLP is the first framework that formally introduces layer-wise pruning into FL. Within the scope of federated learning, more variants and combinations can be further designed based on FedLP.
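The sketch below illustrates one plausible realization of layer-wise pruning in federated updating: each client uploads a random subset of layers, and the server averages each layer over the clients that retained it. The keep probability, the fallback rule, and the dictionary-based model representation are assumptions for illustration, not FedLP's exact schemes.

```python
import numpy as np

rng = np.random.default_rng(2)

def prune_layers(local_model, keep_prob=0.7):
    """Client side: keep each layer independently with probability keep_prob and
    upload only the retained layers (one plausible layer-wise pruning rule)."""
    return {name: w for name, w in local_model.items() if rng.random() < keep_prob}

def aggregate(client_payloads, global_model):
    """Server side: average each layer over the clients that uploaded it,
    falling back to the current global layer if no client retained it."""
    new_model = {}
    for name, w in global_model.items():
        uploads = [p[name] for p in client_payloads if name in p]
        new_model[name] = np.mean(uploads, axis=0) if uploads else w
    return new_model

# Toy model with three layers, five clients perturbing the global weights locally
global_model = {f"layer{i}": np.zeros(4) for i in range(3)}
payloads = [prune_layers({n: w + rng.standard_normal(4) for n, w in global_model.items()})
            for _ in range(5)]
print(aggregate(payloads, global_model))
```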
As one of the core technologies for 5G systems, massive multiple-input multiple-output (MIMO) introduces dramatic capacity improvements along with very high beamforming and spatial multiplexing gains. When developing efficient physical layer algorithms for massive MIMO systems, message passing is a promising candidate owing to its superior performance. However, as their computational complexity increases dramatically with the problem size, state-of-the-art message passing algorithms cannot be directly applied to future 6G systems, where an exceedingly large number of antennas are expected to be deployed. To address this issue, we propose a model-driven deep learning (DL) framework, namely AMP-GNN, for massive MIMO transceiver design, which combines the low complexity of the approximate message passing (AMP) algorithm with the adaptability of graph neural networks (GNNs). Specifically, the AMP-GNN network is constructed by unfolding the AMP algorithm and introducing a GNN module into it. The permutation equivariance property of AMP-GNN is proved, which enables the AMP-GNN to learn more efficiently and to adapt to different numbers of users. We also reveal the underlying reason why GNNs improve the AMP algorithm from the perspective of expectation propagation, which motivates us to amalgamate various GNNs with different message passing algorithms. In the simulations, we take massive MIMO detection as an example to show that the proposed AMP-GNN significantly improves the performance of the AMP detector, achieves comparable performance to state-of-the-art DL-based MIMO detectors, and exhibits strong robustness to various mismatches.
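To give a feel for the algorithm being unfolded, the following is a simplified real-valued AMP detector for BPSK symbols; in an AMP-GNN-style unfolded network the scalar tanh denoiser would be refined or replaced by a learned GNN module in each layer. The problem sizes, noise level, and denoiser choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def amp_bpsk(y, A, iters=20):
    """Simplified real-valued AMP detector for BPSK symbols x in {-1, +1}.
    An unfolded AMP-GNN-style network would refine or replace the scalar
    tanh denoiser below with a learned GNN module in each iteration."""
    M, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        tau = np.mean(z**2)                          # effective noise variance estimate
        r = x + A.T @ z                              # decoupled pseudo-observations
        x = np.tanh(r / tau)                         # posterior-mean denoiser for BPSK
        deriv = (1.0 - x**2) / tau                   # denoiser derivative for the Onsager term
        z = y - A @ x + (N / M) * z * np.mean(deriv)
    return np.sign(x)

# Toy 64x32 detection problem at moderate SNR
M, N = 64, 32
A = rng.standard_normal((M, N)) / np.sqrt(M)         # i.i.d. channel, unit-norm columns on average
x_true = rng.choice([-1.0, 1.0], size=N)
y = A @ x_true + 0.05 * rng.standard_normal(M)
print(np.mean(amp_bpsk(y, A) == x_true))             # fraction of correctly detected symbols
```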
Terahertz ultra-massive MIMO (THz UM-MIMO) is envisioned as one of the key enablers of 6G wireless networks, for which channel estimation is highly challenging. Traditional analytical estimation methods are no longer effective, as the enlarged array aperture and the small wavelength result in a mixture of far-field and near-field paths, constituting a hybrid-field channel. Deep learning (DL)-based methods, despite their competitive performance, generally lack theoretical guarantees and scale poorly with the size of the array. In this paper, we propose a general DL framework for THz UM-MIMO channel estimation, which leverages existing iterative channel estimators and comes with provable guarantees. Each iteration is implemented by a fixed point network (FPN), consisting of a closed-form linear estimator and a DL-based non-linear estimator. The proposed method is well suited to THz UM-MIMO channel estimation owing to several unique advantages. First, the complexity is low and adaptive: the method enjoys provable linear convergence with a low per-iteration cost and monotonically increasing accuracy, which enables an adaptive accuracy-complexity tradeoff. Second, it is robust to practical distribution shifts and can directly generalize to a variety of heavily out-of-distribution scenarios with almost no performance loss, which suits the complicated THz channel conditions. Theoretical analysis and extensive simulation results are provided to illustrate the advantages over state-of-the-art methods in estimation accuracy, convergence rate, complexity, and robustness.
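The sketch below conveys the fixed-point structure described above, assuming a linear measurement model y = A h + n, a gradient-style closed-form linear step, and a simple soft-thresholding function standing in for the DL-based non-linear estimator; stopping on the iterate residual gives the adaptive accuracy-complexity trade-off. All names and parameter values are illustrative, not the paper's FPN.

```python
import numpy as np

rng = np.random.default_rng(4)

def fpn_channel_estimate(y, A, denoiser, step, tol=1e-4, max_iter=100):
    """Illustrative fixed-point iteration for estimating h from y = A h + n:
    a closed-form gradient (linear) step followed by a non-linear estimator.
    When the combined map is contractive the iterates converge linearly, and
    stopping on the residual gives an adaptive accuracy-complexity trade-off."""
    h = np.zeros(A.shape[1], dtype=complex)
    for k in range(max_iter):
        h_lin = h + step * A.conj().T @ (y - A @ h)      # closed-form linear estimator
        h_new = denoiser(h_lin)                          # stand-in for the DL-based estimator
        if np.linalg.norm(h_new - h) <= tol * max(np.linalg.norm(h), 1e-12):
            return h_new, k + 1                          # adaptive early stopping
        h = h_new
    return h, max_iter

# Toy sparse channel: 64 pilots, 128 coefficients, 5 dominant paths
A = (rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))) / np.sqrt(128)
h_true = np.zeros(128, dtype=complex)
h_true[rng.choice(128, 5, replace=False)] = 1.0
y = A @ h_true + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
soft = lambda v, t=0.05: np.exp(1j * np.angle(v)) * np.maximum(np.abs(v) - t, 0.0)
h_hat, iters = fpn_channel_estimate(y, A, soft, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(iters, np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```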
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, downlink channel state information (CSI) needs to be sent from users back to the base station (BS), which causes prohibitive feedback overhead. In this paper, we propose a lightweight and adaptive deep learning-based CSI feedback scheme by capitalizing on deep equilibrium models. Different from existing deep learning-based approaches that stack multiple explicit layers, we propose an implicit equilibrium block to mimic the process of an infinite-depth neural network. In particular, the implicit equilibrium block is defined by a fixed-point iteration, and the trainable parameters in each iteration are shared, which results in a lightweight model. Furthermore, the number of forward iterations can be adjusted according to the users' computational capability, achieving an online accuracy-efficiency trade-off. Simulation results show that the proposed method achieves performance comparable to existing benchmarks with much-reduced complexity and permits an accuracy-efficiency trade-off at runtime.
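The following is a minimal sketch of such a weight-tied implicit block, assuming a small fully connected update function and plain forward iteration toward the fixed point; the layer sizes, the codeword dimension, and the class name are hypothetical and chosen only to illustrate the shared-parameter, adjustable-iteration idea.

```python
import torch
import torch.nn as nn

class EquilibriumBlock(nn.Module):
    """Illustrative weight-tied implicit block: the same small network f is applied
    repeatedly, approaching a fixed point z* = f(z*, x). Extra iterations cost no
    extra parameters, so the iteration count can be chosen at run time."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x, num_iters=10):
        z = torch.zeros_like(x)
        for _ in range(num_iters):                     # shared weights across all iterations
            z = self.f(torch.cat([z, x], dim=-1))
        return z

# Toy CSI decoding: fewer iterations on a weak device, more on a capable one
codeword = torch.randn(8, 64)                          # hypothetical compressed CSI codewords
decoder = EquilibriumBlock(dim=64)
csi_fast = decoder(codeword, num_iters=3)              # lower complexity, lower accuracy
csi_accurate = decoder(codeword, num_iters=30)         # higher complexity, higher accuracy
print(csi_fast.shape, csi_accurate.shape)
```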
Reliability is of paramount importance for the physical layer of wireless systems due to its decisive impact on end-to-end performance. However, the uncertainty of prevailing deep learning (DL)-based physical layer algorithms is hard to quantify due to the black-box nature of neural networks. This limitation is a major obstacle that hinders their practical deployment. In this paper, we attempt to quantify the uncertainty of an important category of DL-based channel estimators. An efficient statistical method is proposed to make blind predictions for the mean squared error of the DL-estimated channel solely based on received pilots, without knowledge of the ground-truth channel, the prior distribution of the channel, or the noise statistics. The complexity of the blind performance prediction is low and scales only linearly with the number of antennas. Simulation results for ultra-massive multiple-input multiple-output (UM-MIMO) channel estimation with a mixture of far-field and near-field paths are provided to verify the accuracy and efficiency of the proposed method.