Future wireless networks are envisioned to simultaneously provide high data-rate communication and ubiquitous environment-aware services for numerous users. One promising approach to meeting this demand is network-level integrated sensing and communications (ISAC), which jointly designs the signal processing and resource allocation over the entire network. However, to unleash the full potential of network-level ISAC, several critical challenges must be tackled, among which interference management is one of the most significant. In this article, we build a bridge between interference mitigation techniques and the corresponding optimization methods, which facilitates efficient interference mitigation in network-level ISAC systems. In particular, we first identify several types of interference in network-level ISAC systems, including self-interference, mutual interference, crosstalk, clutter, and multiuser interference. We then present several promising techniques that can be utilized to suppress specific types of interference. For each type of interference, we discuss the corresponding problem formulation and identify the associated optimization methods. Moreover, to illustrate the effectiveness of the proposed interference mitigation techniques, two concrete network-level ISAC systems, namely coordinated cellular network-based and distributed antenna-based ISAC systems, are investigated from an interference management perspective. Experimental results indicate that it is beneficial to collaboratively employ different interference mitigation techniques and to leverage the network structure to achieve the full potential of network-level ISAC. Finally, we highlight several promising future research directions for the design of ISAC systems.
Next-generation (6G) wireless networks are expected to provide not only seamless, high data-rate communications but also ubiquitous sensing services. By providing vast spatial degrees of freedom (DoFs), ultra-massive multiple-input multiple-output (UM-MIMO) technology is a key enabler for both sensing and communications in 6G. However, the adoption of UM-MIMO leads to a shift from far-field to near-field electromagnetic propagation, which poses novel challenges in system design. Specifically, near-field effects introduce highly non-linear spherical wave models that render existing designs based on plane-wave assumptions ineffective. In this paper, we focus on two crucial tasks in sensing and communications, namely localization and channel estimation, and investigate their joint design by exploiting the near-field propagation characteristics, achieving mutual benefits between the two tasks. In addition, multiple base stations (BSs) are leveraged to facilitate a cooperative localization framework. To address the joint channel estimation and cooperative localization problem for near-field UM-MIMO systems, we propose a variational Newtonized near-field channel estimation (VNNCE) algorithm and a Gaussian fusion cooperative localization (GFCL) algorithm. The VNNCE algorithm exploits the spatial DoFs provided by the near-field channel to obtain position-related soft information, while the GFCL algorithm fuses this soft information to achieve more accurate localization. Additionally, we introduce a joint architecture that seamlessly integrates channel estimation and cooperative localization.
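As a rough illustration of the Gaussian-fusion step in GFCL, the sketch below combines per-BS position estimates, each summarized by a mean and covariance, using the standard product-of-Gaussians (information-filter) rule; Python/NumPy, the function name, and the example numbers are illustrative assumptions, and the paper's actual fusion rule may differ.

import numpy as np

def fuse_gaussian_estimates(means, covs):
    # Product-of-Gaussians fusion: precisions add, and the fused mean is
    # the precision-weighted average of the per-BS means.
    precision = sum(np.linalg.inv(C) for C in covs)
    fused_cov = np.linalg.inv(precision)
    weighted = sum(np.linalg.inv(C) @ m for m, C in zip(means, covs))
    return fused_cov @ weighted, fused_cov

# Example: three BSs report 2-D position estimates with covariances.
means = [np.array([10.2, 4.9]), np.array([9.8, 5.3]), np.array([10.1, 5.0])]
covs = [np.eye(2) * s for s in (0.5, 1.0, 0.3)]
pos, cov = fuse_gaussian_estimates(means, covs)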
Holographic MIMO (HMIMO) is increasingly recognized as a key enabling technology for 6G wireless systems, deploying an extremely large number of antennas within a compact space to fully exploit the potential of the electromagnetic (EM) channel. Nevertheless, the benefits of HMIMO systems cannot be fully unleashed without an efficient means to estimate the high-dimensional channel, whose distribution becomes increasingly complicated as the near-field region becomes accessible. In this paper, we address the fundamental challenge of designing a low-complexity Bayes-optimal channel estimator for near-field HMIMO systems operating in unknown EM environments. The core idea is to estimate the HMIMO channels solely based on Stein's score function of the received pilot signals and an estimated noise level, without relying on priors or supervision that are infeasible in practical deployment. A neural network is trained with the unsupervised denoising score matching objective to learn the parameterized score function. Meanwhile, a principal component analysis (PCA)-based algorithm is proposed to estimate the noise level by leveraging the low-rank near-field spatial correlation. Building upon these techniques, we develop a Bayes-optimal score-based channel estimator for fully-digital HMIMO transceivers in closed form. The optimal score-based estimator is also extended to hybrid analog-digital HMIMO systems by incorporating it into a low-complexity message passing algorithm. The (quasi-)Bayes-optimality of the proposed estimators is validated both in theory and by extensive simulation results. Beyond optimality, we show that our proposal is robust to various mismatches and, thanks to its unsupervised nature, can quickly adapt to dynamic EM environments in an online manner, demonstrating its potential for real-world deployment.
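For context, in the simplest fully-digital case where the post-processed pilot observation reduces to $\mathbf{y} = \mathbf{h} + \mathbf{n}$ with noise level $\sigma^2$ (an assumed model used here only for illustration), the classical Tweedie identity connects the MMSE estimate to the learned score, which is the general principle behind score-based estimators of this kind:

\hat{\mathbf{h}}_{\mathrm{MMSE}} = \mathbb{E}[\mathbf{h} \mid \mathbf{y}] = \mathbf{y} + \sigma^2 \nabla_{\mathbf{y}} \log p(\mathbf{y}),

where $\nabla_{\mathbf{y}} \log p(\mathbf{y})$ is the score of the marginal pilot distribution learned by denoising score matching and $\sigma^2$ is the PCA-estimated noise level; whether the paper's closed-form estimator takes exactly this form is an assumption here.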
Artificial intelligence (AI) technologies have emerged as pivotal enablers across a multitude of industries, including consumer electronics, healthcare, and manufacturing, largely due to their resurgence over the past decade. The transformative power of AI is primarily derived from the utilization of deep neural networks (DNNs), which require extensive data for training and substantial computational resources for processing. Consequently, DNN models are typically trained and deployed on resource-rich cloud servers. However, due to potential latency issues associated with cloud communications, deep learning (DL) workflows are increasingly being transitioned to wireless edge networks near end-user devices (EUDs). This shift is designed to support latency-sensitive applications and has given rise to a new paradigm of edge AI, which will play a critical role in upcoming 6G networks to support ubiquitous AI applications. Despite its potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL. Specifically, the acquisition of large-scale data, as well as the training and inference processes of DNNs, can rapidly deplete the battery energy of EUDs. This necessitates an energy-conscious approach to edge AI to ensure both optimal and sustainable performance. In this paper, we present a contemporary survey on green edge AI. We commence by analyzing the principal energy consumption components of edge AI systems to identify the fundamental design principles of green edge AI. Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, namely training data acquisition, edge training, and edge inference. Finally, we underscore potential future research directions to further enhance the energy efficiency of edge AI.
Holographic MIMO (HMIMO) has recently been recognized as a promising enabler for future 6G systems through the use of an ultra-massive number of antennas in a compact space to exploit the propagation characteristics of the electromagnetic (EM) channel. Nevertheless, the promised gains of HMIMO cannot be fully unleashed without an efficient means to estimate the high-dimensional channel. Bayes-optimal estimators typically necessitate either a large volume of supervised training samples or a priori knowledge of the true channel distribution, which is hardly available in practice due to the enormous system scale and the complicated EM environments. It is thus important to design a Bayes-optimal estimator for HMIMO channels in arbitrary and unknown EM environments, free of any supervision or priors. This work proposes a self-supervised minimum mean-square-error (MMSE) channel estimation algorithm based on powerful machine learning tools, namely score matching and principal component analysis. The training stage requires only the pilot signals, without knowledge of the spatial correlation, the ground-truth channels, or the received signal-to-noise ratio. Simulation results show that, despite being fully self-supervised, the proposed algorithm can still approach the performance of the oracle MMSE method with extremely low complexity, making it a competitive candidate in practice.
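A minimal sketch of how such a score network could be trained from pilots alone is given below, using the standard denoising score matching objective (Vincent, 2011); the function and variable names are illustrative, real-valued pilots are assumed for simplicity, and PyTorch is used only for concreteness.

import torch

def dsm_loss(score_net, y, sigma):
    # Denoising score matching: perturb the received pilots with Gaussian
    # noise at level sigma and regress the network output onto the score of
    # the perturbation kernel, -eps/sigma. Only pilots are needed; no
    # ground-truth channels or SNR knowledge.
    eps = torch.randn_like(y)
    y_tilde = y + sigma * eps
    target = -eps / sigma
    return ((score_net(y_tilde) - target) ** 2).sum(dim=-1).mean()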
Future sixth-generation (6G) systems are expected to leverage extremely large-scale multiple-input multiple-output (XL-MIMO) technology, which significantly expands the range of the near-field region. While accurate channel estimation is essential for beamforming and data detection, the unique characteristics of near-field channels pose additional challenges to the effective acquisition of channel state information. In this paper, we propose a novel codebook design that enables efficient near-field channel estimation with a significantly reduced codebook size. Specifically, we formulate an eigen-problem based on the near-field electromagnetic wave transmission model. Moreover, we derive the general form of the eigenvectors associated with the near-field channel matrix, revealing their noteworthy connection to the discrete prolate spheroidal sequence (DPSS). Based on the proposed near-field codebook design, we further introduce a two-step channel estimation scheme. Simulation results demonstrate that the proposed codebook design not only achieves superior sparsification of near-field channels with a lower leakage effect, but also significantly improves the accuracy of compressive sensing-based channel estimation.
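To make the DPSS connection concrete, the snippet below builds a small dictionary of discrete prolate spheroidal (Slepian) sequences for an N-antenna array using SciPy; the parameter values and the exact way these sequences enter the proposed codebook are assumptions, since the paper derives the eigenvectors from the near-field transmission model.

import numpy as np
from scipy.signal.windows import dpss

N = 256             # number of antennas (illustrative value)
NW = 4              # time-half-bandwidth product (assumed)
num_atoms = 2 * NW  # keep only the well-concentrated sequences
codebook = dpss(N, NW, Kmax=num_atoms)   # shape: (num_atoms, N)
codebook = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)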
As one of the core technologies for 5G systems, massive multiple-input multiple-output (MIMO) introduces dramatic capacity improvements along with very high beamforming and spatial multiplexing gains. When developing efficient physical layer algorithms for massive MIMO systems, message passing is a promising candidate owing to its superior performance. However, because the computational complexity of state-of-the-art message passing algorithms increases dramatically with the problem size, they cannot be directly applied to future 6G systems, where an exceedingly large number of antennas are expected to be deployed. To address this issue, we propose a model-driven deep learning (DL) framework for massive MIMO transceiver design, namely AMP-GNN, which combines the low complexity of the approximate message passing (AMP) algorithm with the adaptability of graph neural networks (GNNs). Specifically, the structure of the AMP-GNN network is customized by unfolding the AMP algorithm and introducing a GNN module into it. The permutation equivariance property of AMP-GNN is proved, which enables the network to learn more efficiently and to adapt to different numbers of users. We also reveal, from the perspective of expectation propagation, the underlying reason why GNNs improve the AMP algorithm, which motivates us to amalgamate various GNNs with different message passing algorithms. In the simulations, we take massive MIMO detection as an example to show that the proposed AMP-GNN significantly improves the performance of the AMP detector, achieves performance comparable to state-of-the-art DL-based MIMO detectors, and exhibits strong robustness to various mismatches.
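The sketch below illustrates, under simplifying assumptions, the unfolding idea: each iteration performs a matched-filter update on the residual and then applies a learned GNN module as the per-user denoiser. The Onsager correction and the exact variance updates of AMP-GNN are omitted, and gnn_denoiser is a hypothetical callable, so this is an illustration rather than the paper's algorithm.

import torch

def unfolded_amp_gnn(y, H, gnn_denoiser, num_iters=10):
    # Minimal unfolded iteration for detecting x from y = Hx + n.
    # Each pass: residual -> matched filter -> learned GNN refinement.
    M, N = H.shape
    x = torch.zeros(N, dtype=y.dtype)
    for _ in range(num_iters):
        z = y - H @ x                   # residual
        r = x + H.conj().T @ z          # per-user pseudo-observations
        tau2 = z.abs().pow(2).mean()    # crude effective-noise estimate
        x = gnn_denoiser(r, tau2)       # GNN refines all users jointly
    return x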
Terahertz ultra-massive MIMO (THz UM-MIMO) is envisioned as one of the key enablers of 6G wireless networks, for which channel estimation is highly challenging. Traditional analytical estimation methods are no longer effective, as the enlarged array aperture and the small wavelength result in a mixture of far-field and near-field paths, constituting a hybrid-field channel. Deep learning (DL)-based methods, despite their competitive performance, generally lack theoretical guarantees and scale poorly with the array size. In this paper, we propose a general DL framework for THz UM-MIMO channel estimation that leverages existing iterative channel estimators and comes with provable guarantees. Each iteration is implemented by a fixed point network (FPN), consisting of a closed-form linear estimator and a DL-based non-linear estimator. The proposed method is well suited to THz UM-MIMO channel estimation owing to several unique advantages. First, its complexity is low and adaptive: it enjoys provable linear convergence with a low per-iteration cost and monotonically increasing accuracy, which enables an adaptive accuracy-complexity tradeoff. Second, it is robust to practical distribution shifts and can directly generalize to a variety of heavily out-of-distribution scenarios with almost no performance loss, which suits the complicated THz channel conditions. Theoretical analysis and extensive simulation results are provided to illustrate the advantages over state-of-the-art methods in estimation accuracy, convergence rate, complexity, and robustness.
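As a rough sketch of this fixed-point structure (assuming a linear measurement model y = A h + n and a generic learned denoiser; the paper's actual FPN components and contraction conditions are not reproduced), each iteration alternates a closed-form gradient step with a DL-based non-linear step until the iterates stop changing, which is also where the accuracy-complexity tradeoff comes from.

import torch

def fpn_style_estimate(y, A, denoiser, step=0.5, tol=1e-4, max_iters=50):
    # Fixed-point iteration: closed-form linear (gradient) step on
    # y = A h + n, followed by a learned non-linear denoiser. The
    # tolerance controls when to stop, trading accuracy for complexity.
    h = torch.zeros(A.shape[1], dtype=y.dtype)
    for _ in range(max_iters):
        grad = A.conj().T @ (A @ h - y)     # linear, closed-form step
        h_new = denoiser(h - step * grad)   # DL-based non-linear step
        if torch.norm(h_new - h) <= tol * (torch.norm(h) + 1e-12):
            return h_new
        h = h_new
    return h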
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, downlink channel state information (CSI) needs to be fed back from the users to the base station (BS), which causes prohibitive feedback overhead. In this paper, we propose a lightweight and adaptive deep learning-based CSI feedback scheme by capitalizing on deep equilibrium models. Different from existing deep learning-based approaches that stack multiple explicit layers, we propose an implicit equilibrium block to mimic the behavior of an infinite-depth neural network. In particular, the implicit equilibrium block is defined by a fixed-point iteration whose trainable parameters are shared across iterations, resulting in a lightweight model. Furthermore, the number of forward iterations can be adjusted according to the users' computational capability, achieving an online accuracy-efficiency trade-off. Simulation results show that the proposed method achieves performance comparable to existing benchmarks with much-reduced complexity while permitting an accuracy-efficiency trade-off at runtime.
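A forward-pass sketch of such a weight-tied equilibrium block is given below (PyTorch assumed for concreteness; the encoder/decoder around the block and the implicit-differentiation training are not shown). All iterations reuse the same parameters, which keeps the model lightweight, and num_iters can be set at runtime to trade accuracy for computation.

import torch
import torch.nn as nn

class EquilibriumBlock(nn.Module):
    # Weight-tied block iterated toward a fixed point z* = f(z*, x).
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, x, num_iters=10):
        z = torch.zeros_like(x)
        for _ in range(num_iters):          # same parameters every pass
            z = self.f(torch.cat([z, x], dim=-1))
        return z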
Reliability is of paramount importance for the physical layer of wireless systems due to its decisive impact on end-to-end performance. However, the uncertainty of prevailing deep learning (DL)-based physical layer algorithms is hard to quantify due to the black-box nature of neural networks. This limitation is a major obstacle that hinders their practical deployment. In this paper, we attempt to quantify the uncertainty of an important category of DL-based channel estimators. An efficient statistical method is proposed to make blind predictions of the mean squared error of the DL-estimated channel based solely on the received pilots, without knowledge of the ground-truth channel, the prior distribution of the channel, or the noise statistics. The complexity of the blind performance prediction is low and scales only linearly with the number of antennas. Simulation results for ultra-massive multiple-input multiple-output (UM-MIMO) channel estimation with a mixture of far-field and near-field paths are provided to verify the accuracy and efficiency of the proposed method.