Within the realm of rapidly advancing wireless sensor networks (WSNs), distributed detection assumes a significant role in various practical applications. However, a critical challenge lies in maintaining robust detection performance while operating within the constraints of limited bandwidth and energy resources. This paper introduces a novel approach that combines model-driven deep learning (DL) with binary quantization to strike a balance between communication overhead and detection performance in WSNs. We begin by establishing the lower bound of the detection error probability for distributed detection under the maximum a posteriori (MAP) criterion. Furthermore, we prove the global optimality of employing identical local quantizers across sensors, thereby maximizing the corresponding Chernoff information. Subsequently, the paper derives the minimum MAP detection error probability (MAPDEP) by implementing identical binary probabilistic quantizers across the sensors. Moreover, the paper establishes the equivalence between utilizing all quantized data and their average as input to the detector at the fusion center (FC). In particular, we derive the Kullback-Leibler (KL) divergence, which measures the difference between the true posterior probability and the output of the proposed detector. Leveraging the MAPDEP and the KL divergence as loss functions, the paper proposes a model-driven DL method to separately train the probability controller module in the quantizer and the detector module at the FC. Numerical results validate the convergence and effectiveness of the proposed method, which achieves near-optimal performance with reduced complexity for Gaussian hypothesis testing.
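For reference, the textbook Chernoff bound underlying the MAP error analysis above (a standard result, not the paper's exact derivation; here $\mathbf{u}$ collects the quantized sensor outputs and $\pi_0,\pi_1$ denote the prior probabilities of the two hypotheses) reads
\[
P_e^{\mathrm{MAP}} \;=\; \sum_{\mathbf{u}} \min\!\big(\pi_0 P(\mathbf{u}\mid\mathcal{H}_0),\, \pi_1 P(\mathbf{u}\mid\mathcal{H}_1)\big)
\;\le\; \min_{s\in[0,1]} \pi_0^{s}\,\pi_1^{1-s} \sum_{\mathbf{u}} P(\mathbf{u}\mid\mathcal{H}_0)^{s}\, P(\mathbf{u}\mid\mathcal{H}_1)^{1-s},
\]
and the corresponding Chernoff information, which the identical local quantizers are chosen to maximize, is
\[
C(\mathcal{H}_0,\mathcal{H}_1) \;=\; -\min_{s\in[0,1]} \log \sum_{\mathbf{u}} P(\mathbf{u}\mid\mathcal{H}_0)^{s}\, P(\mathbf{u}\mid\mathcal{H}_1)^{1-s}.
\]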
Orthogonal time frequency space (OTFS) modulation has emerged as a promising solution to support high-mobility wireless communications, for which cost-effective data detectors are critical. Although graph neural network (GNN)-based data detectors can achieve decent detection accuracy at reasonable computation cost, they fail to fully harness the prior information of the transmitted data. To further reduce the data detection error of OTFS systems, this letter develops an AMP-GNN-based detector, leveraging the approximate message passing (AMP) algorithm to iteratively improve the symbol estimates of a GNN. Given that the inter-Doppler interference (IDI) symbols incur substantial computational overhead in the constructed GNN, a learning-based IDI approximation is implemented to sustain low detection complexity. Simulation results demonstrate that the proposed AMP-GNN-based detector achieves remarkable bit error rate (BER) performance compared with existing baselines. Meanwhile, the proposed IDI approximation scheme avoids a large amount of computation with negligible BER degradation.
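To make the AMP component concrete, the following is a minimal generic AMP detector for y = Hx + n with QPSK symbols (a sketch under the usual i.i.d.-channel assumption, not the letter's AMP-GNN detector; in the proposed scheme the per-iteration denoising step is refined by a GNN and the IDI symbols are handled by the learned approximation):

```python
import numpy as np

def qpsk_denoiser(r, tau2):
    """Elementwise posterior mean/variance of unit-energy QPSK symbols
    observed in complex Gaussian noise of variance tau2."""
    a = 1.0 / np.sqrt(2.0)
    mean = a * np.tanh(2.0 * a * r.real / tau2) + 1j * a * np.tanh(2.0 * a * r.imag / tau2)
    var = 1.0 - np.abs(mean) ** 2
    return mean, var

def amp_detect(y, H, sigma2, n_iter=20):
    """Basic AMP iterations for y = Hx + n, assuming H has i.i.d. entries of
    variance 1/M (generic sketch only)."""
    M, N = H.shape
    x = np.zeros(N, dtype=complex)
    z = y.astype(complex).copy()
    tau2 = sigma2 + N / M                      # initial effective noise variance
    for _ in range(n_iter):
        r = x + H.conj().T @ z                 # decoupled pseudo-observations
        x, v = qpsk_denoiser(r, tau2)
        z = y - H @ x + (N / M) * z * (np.mean(v) / tau2)   # Onsager correction
        tau2 = sigma2 + (N / M) * np.mean(v)   # state-evolution style update
    return x
```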
Simultaneous wireless information and power transfer (SWIPT) has been proposed to offer communication services and transfer power to the energy harvesting receiver (EHR) concurrently. However, existing works have mainly focused on static EHRs, without considering the location uncertainty caused by EHR movement and location estimation errors. To tackle this issue, this paper considers sensing-assisted SWIPT design in a networked integrated sensing and communication (ISAC) system in the presence of location uncertainty. A two-phase robust design is proposed to reduce the location uncertainty and improve the power transfer efficiency. In particular, each time frame is divided into two phases, i.e., a sensing phase and a wireless power transfer (WPT) phase, via time-splitting. The sensing phase performs collaborative sensing to localize the EHR, and the resulting estimate is then utilized in the WPT phase for efficient power transfer. To minimize the power consumption subject to given communication and power transfer requirements, a two-layer optimization framework is proposed to jointly optimize the time-splitting ratio, the coordinated beamforming policy, and the sensing node selection. Simulation results validate the effectiveness of the proposed design and demonstrate the existence of an optimal time-splitting ratio for a given location uncertainty.
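As a toy illustration of the two-layer structure, the outer layer can be viewed as a one-dimensional search over the time-splitting ratio, with the inner beamforming and node-selection problem abstracted by a hypothetical solver (the interface below is assumed purely for illustration):

```python
import numpy as np

def optimize_time_splitting(inner_solver, rhos=np.linspace(0.05, 0.95, 19)):
    """Outer layer: sweep the time-splitting ratio rho between the sensing and
    WPT phases; inner_solver(rho) is a placeholder returning the minimum
    transmit power of the inner beamforming / sensing-node-selection problem."""
    best_rho, best_power = None, np.inf
    for rho in rhos:
        power = inner_solver(rho)
        if power < best_power:
            best_rho, best_power = rho, power
    return best_rho, best_power
```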
Future wireless networks are envisioned to simultaneously provide high data-rate communication and ubiquitous environment-aware services for numerous users. One promising approach to meet this demand is to employ network-level integrated sensing and communications (ISAC) by jointly designing the signal processing and resource allocation over the entire network. However, to unleash the full potential of network-level ISAC, several critical challenges must be tackled, among which interference management is one of the most significant. In this article, we build a bridge between interference mitigation techniques and the corresponding optimization methods, which facilitates efficient interference mitigation in network-level ISAC systems. In particular, we first identify several types of interference in network-level ISAC systems, including self-interference, mutual interference, crosstalk, clutter, and multiuser interference. Then, we present several promising techniques that can be utilized to suppress specific types of interference. For each type of interference, we discuss the corresponding problem formulation and identify the associated optimization methods. Moreover, to illustrate the effectiveness of the proposed interference mitigation techniques, two concrete network-level ISAC systems, namely coordinated cellular network-based and distributed antenna-based ISAC systems, are investigated from an interference management perspective. Experimental results indicate that it is beneficial to employ different interference mitigation techniques collaboratively and to leverage the network structure to achieve the full potential of network-level ISAC. Finally, we highlight several promising future research directions for the design of ISAC systems.
Sensing performance is typically evaluated by classical radar metrics, such as the Cramér-Rao bound and the signal-to-clutter-plus-noise ratio. The recent development of the integrated sensing and communication (ISAC) framework has motivated efforts to unify the performance metrics for sensing and communication, where mutual information (MI) was proposed as a sensing performance metric for deterministic signals. However, the need for communication in ISAC systems necessitates the transmission of random signals for sensing applications, whereas an explicit evaluation of the sensing mutual information (SMI) with random signals is not yet available in the literature. This paper aims to fill this research gap and investigate the unification of sensing and communication performance metrics. For that purpose, we first derive the explicit expression for the SMI with random signals by utilizing random matrix theory. Building on this result, we further establish connections between the SMI and traditional sensing metrics, such as the ergodic minimum mean square error (EMMSE), the ergodic linear minimum mean square error (ELMMSE), and the ergodic Bayesian Cramér-Rao bound (EBCRB). Such connections open up the opportunity to unify sensing and communication performance metrics, which facilitates the analysis and design of ISAC systems. Finally, the SMI is utilized to optimize the precoder for both sensing-only and ISAC applications. Simulation results validate the accuracy of the theoretical results and the effectiveness of the proposed precoding designs.
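For context, one commonly used closed form for the SMI with a deterministic probing signal (notation assumed here: $\mathbf{Y}=\mathbf{H}\mathbf{X}+\mathbf{Z}$, target response $\mathbf{H}\in\mathbb{C}^{N_r\times N_t}$ with i.i.d. rows of spatial correlation $\mathbf{R}$, probing signal $\mathbf{X}\in\mathbb{C}^{N_t\times T}$, and noise variance $\sigma^2$) is
\[
I(\mathbf{Y};\mathbf{H}\mid\mathbf{X}) \;=\; N_r \log\det\!\Big(\mathbf{I}_T + \tfrac{1}{\sigma^2}\,\mathbf{X}^{\mathsf H}\mathbf{R}\,\mathbf{X}\Big),
\]
whereas the paper's contribution is the explicit evaluation of such expressions when $\mathbf{X}$ itself is random, via random matrix theory.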
Recently, innovative model aggregation methods based on knowledge distillation (KD) have been proposed for federated learning (FL). These methods not only improve the robustness of model aggregation in heterogeneous learning environments, but also allow training heterogeneous models on client devices. However, the scalability of existing methods is unsatisfactory, because the training cost on the server increases with the number of clients, which limits their application in large-scale systems. Furthermore, the ensemble of existing methods is built from a set of client models initialized from the same checkpoint, resulting in low diversity. In this paper, we propose a scalable and diversity-enhanced federated distillation scheme, FedSDD, which decouples the training complexity from the number of clients to enhance scalability, and builds the ensemble from a set of aggregated models with enhanced diversity. In particular, the teacher model in FedSDD is an ensemble built from a small group of aggregated (global) models, instead of all client models, so that the computation cost does not scale with the number of clients. Furthermore, to enhance diversity, FedSDD performs KD on only one of the global models, i.e., the main global model, which improves the performance of both the ensemble and the main global model. While partitioning the client models into more groups allows building an ensemble with more aggregated models, the convergence of the individual aggregated models slows down. We introduce temporal ensembling to mitigate this issue, which provides significant improvements in heterogeneous settings. Experimental results show that FedSDD outperforms other FL methods, including FedAvg and FedDF, on benchmark datasets.
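A minimal sketch of the group-aggregate-then-distill idea described above (PyTorch; the function names, the unlabeled distillation set, and all hyper-parameters are illustrative assumptions, not the authors' implementation):

```python
import copy
import torch
import torch.nn.functional as F

def fedavg(models):
    """Average the parameters of a group of client models (standard FedAvg)."""
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in avg.state_dict().items():
            stacked = torch.stack([m.state_dict()[name].float() for m in models])
            param.copy_(stacked.mean(dim=0))
    return avg

def distill_main_model(main_model, group_models, unlabeled_loader, T=3.0, lr=1e-3):
    """KD step: the ensemble of group-aggregated (global) models is the teacher;
    only the main global model is refined, so the server cost does not grow
    with the number of clients."""
    opt = torch.optim.Adam(main_model.parameters(), lr=lr)
    for x in unlabeled_loader:                      # batches of unlabeled inputs
        with torch.no_grad():
            teacher_logits = torch.stack([g(x) for g in group_models]).mean(dim=0)
        loss = F.kl_div(F.log_softmax(main_model(x) / T, dim=-1),
                        F.softmax(teacher_logits / T, dim=-1),
                        reduction="batchmean") * T * T
        opt.zero_grad()
        loss.backward()
        opt.step()
    return main_model
```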
The next-generation (6G) wireless networks are expected to provide not only seamless and high data-rate communications, but also ubiquitous sensing services. By providing vast spatial degrees of freedom (DoFs), ultra-massive multiple-input multiple-output (UM-MIMO) technology is a key enabler for both sensing and communications in 6G. However, the adoption of UM-MIMO leads to a shift from the far field to the near field in terms of electromagnetic propagation, which poses novel challenges in system design. Specifically, near-field effects introduce highly non-linear spherical wave models that render existing designs based on plane-wave assumptions ineffective. In this paper, we focus on two crucial tasks in sensing and communications, respectively, i.e., localization and channel estimation, and investigate their joint design by exploring the near-field propagation characteristics, achieving mutual benefits between the two tasks. In addition, multiple base stations (BSs) are leveraged to facilitate a cooperative localization framework. To address the joint channel estimation and cooperative localization problem for near-field UM-MIMO systems, we propose a variational Newtonized near-field channel estimation (VNNCE) algorithm and a Gaussian fusion cooperative localization (GFCL) algorithm. The VNNCE algorithm exploits the spatial DoFs provided by the near-field channel to obtain position-related soft information, while the GFCL algorithm fuses this soft information to achieve more accurate localization. Additionally, we introduce a joint architecture that seamlessly integrates channel estimation and cooperative localization.
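The fusion step can be illustrated by the standard product-of-Gaussians rule for combining independent position estimates from multiple BSs (a generic sketch of Gaussian fusion, not the exact GFCL algorithm of the paper):

```python
import numpy as np

def fuse_gaussians(means, covs):
    """Fuse independent Gaussian position estimates (mean_i, cov_i) by multiplying
    their densities: the fused information matrix is the sum of the per-BS ones."""
    info = sum(np.linalg.inv(C) for C in covs)            # total information matrix
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ sum(np.linalg.inv(C) @ m for m, C in zip(means, covs))
    return fused_mean, fused_cov
```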
Holographic MIMO (HMIMO) is increasingly recognized as a key enabling technology for 6G wireless systems through the deployment of an extremely large number of antennas within a compact space to fully exploit the potential of the electromagnetic (EM) channel. Nevertheless, the benefits of HMIMO systems cannot be fully unleashed without an efficient means to estimate the high-dimensional channel, whose distribution becomes increasingly complicated due to the accessibility of the near-field region. In this paper, we address the fundamental challenge of designing a low-complexity Bayes-optimal channel estimator for near-field HMIMO systems operating in unknown EM environments. The core idea is to estimate the HMIMO channels solely based on Stein's score function of the received pilot signals and an estimated noise level, without relying on priors or supervision that are infeasible in practical deployments. A neural network is trained with the unsupervised denoising score matching objective to learn the parameterized score function. Meanwhile, a principal component analysis (PCA)-based algorithm is proposed to estimate the noise level by leveraging the low-rank near-field spatial correlation. Building upon these techniques, we develop a Bayes-optimal score-based channel estimator for fully-digital HMIMO transceivers in closed form. The optimal score-based estimator is also extended to hybrid analog-digital HMIMO systems by incorporating it into a low-complexity message passing algorithm. The (quasi-)Bayes-optimality of the proposed estimators is validated both in theory and by extensive simulation results. In addition to optimality, our proposal is shown to be robust to various mismatches and capable of quickly adapting to dynamic EM environments in an online manner thanks to its unsupervised nature, demonstrating its potential in real-world deployment.
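The identity underlying such score-based estimators is Tweedie's formula (stated here for real-valued Gaussian noise and a simple pilot model $\mathbf{y}=\mathbf{h}+\mathbf{n}$, $\mathbf{n}\sim\mathcal{N}(\mathbf{0},\sigma^2\mathbf{I})$; the paper's closed-form estimator handles the general pilot matrix and its extension to hybrid arrays):
\[
\hat{\mathbf{h}}_{\mathrm{MMSE}} \;=\; \mathbb{E}[\mathbf{h}\mid\mathbf{y}] \;=\; \mathbf{y} + \sigma^{2}\,\nabla_{\mathbf{y}} \log p(\mathbf{y}),
\]
where the score $\nabla_{\mathbf{y}}\log p(\mathbf{y})$ is learned from the pilots via denoising score matching and $\sigma^{2}$ is supplied by the PCA-based noise-level estimator.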
Holographic MIMO (HMIMO) has recently been recognized as a promising enabler for future 6G systems through the use of an ultra-massive number of antennas in a compact space to exploit the propagation characteristics of the electromagnetic (EM) channel. Nevertheless, the promised gains of HMIMO cannot be fully unleashed without an efficient means to estimate the high-dimensional channel. Bayes-optimal estimators typically necessitate either a large volume of supervised training samples or a priori knowledge of the true channel distribution, both of which are hardly available in practice due to the enormous system scale and the complicated EM environments. It is thus important to design a Bayes-optimal estimator for HMIMO channels in arbitrary and unknown EM environments, free of any supervision or priors. This work proposes a self-supervised minimum mean-square-error (MMSE) channel estimation algorithm based on powerful machine learning tools, i.e., score matching and principal component analysis (PCA). The training stage requires only the pilot signals, without knowledge of the spatial correlation, the ground-truth channels, or the received signal-to-noise ratio. Simulation results show that, even though it is fully self-supervised, the proposed algorithm approaches the performance of the oracle MMSE method with extremely low complexity, making it a competitive candidate in practice.
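A sketch of the PCA-style noise-level estimation used alongside score matching (assuming the spatial correlation of the channel is approximately low-rank with a known rank; the actual algorithm also determines the effective rank from the data):

```python
import numpy as np

def estimate_noise_level(Y, rank):
    """Estimate the noise variance from received pilots Y (antennas x samples):
    the trailing eigenvalues of the sample covariance approximate sigma^2 when
    the channel covariance is (approximately) rank-deficient."""
    R = Y @ Y.conj().T / Y.shape[1]           # sample covariance
    eig = np.sort(np.linalg.eigvalsh(R))      # ascending, real for Hermitian R
    return float(np.mean(eig[: R.shape[0] - rank]))
```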
Sensing performance is typically evaluated by classical metrics, such as the Cramér-Rao bound and the signal-to-clutter-plus-noise ratio. The recent development of the integrated sensing and communication (ISAC) framework has motivated efforts to unify the metrics for sensing and communication, where researchers have proposed to utilize mutual information (MI) to measure sensing performance with deterministic signals. However, the need for communication in ISAC systems necessitates the use of random signals for sensing applications, whereas a closed-form evaluation of the sensing mutual information (SMI) with random signals is not yet available in the literature. This paper investigates the achievable performance and precoder design for sensing applications with random signals. For that purpose, we first derive the closed-form expression for the SMI with random signals by utilizing random matrix theory. The result reveals interesting physical insights regarding the relation between the SMI with deterministic and with random signals. The derived SMI is then utilized to optimize the precoder by leveraging a manifold-based optimization approach. The effectiveness of the proposed methods is validated by simulation results.
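As a simple illustration of SMI-based precoder design, the sketch below performs projected gradient ascent with a normalization retraction on the power sphere, using an assumed log-det SMI surrogate $\log\det(\mathbf{I}+\mathbf{P}^{\mathsf H}\mathbf{R}\mathbf{P}/\sigma^2)$ (a toy substitute for, not a reproduction of, the paper's manifold-based algorithm):

```python
import numpy as np

def smi_surrogate(P, R, sigma2):
    """Assumed SMI surrogate: log det(I + P^H R P / sigma2)."""
    M = np.eye(P.shape[1]) + P.conj().T @ R @ P / sigma2
    return np.linalg.slogdet(M)[1]

def optimize_precoder(R, sigma2, n_streams, power, n_iter=200, lr=0.1, seed=0):
    """Projected gradient ascent on the power sphere ||P||_F^2 = power."""
    rng = np.random.default_rng(seed)
    N = R.shape[0]
    P = rng.standard_normal((N, n_streams)) + 1j * rng.standard_normal((N, n_streams))
    P *= np.sqrt(power) / np.linalg.norm(P)
    for _ in range(n_iter):
        A = np.linalg.inv(np.eye(n_streams) + P.conj().T @ R @ P / sigma2)
        grad = R @ P @ A / sigma2                 # Wirtinger gradient of the surrogate
        P = P + lr * grad
        P *= np.sqrt(power) / np.linalg.norm(P)   # retract onto the power sphere
    return P
```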