Abstract: Integrated Sensing and Communications (ISAC) has garnered significant attention as a promising technology for next-generation wireless and vehicular communications. Among candidate waveforms, Orthogonal Frequency Division Multiplexing (OFDM) has been extensively investigated over the past decade for its robustness against frequency-selective fading and its favorable ranging performance. However, the waveform's sensing and communication (S&C) performance depends strongly on the modulation scheme: variable-amplitude constellations such as quadrature amplitude modulation (QAM) are more efficient for communication, whereas constant-modulus modulations such as phase shift keying (PSK) are better suited to sensing. Yet it remains unclear whether these findings persist under power amplifier (PA) nonlinearity. Because OFDM signals exhibit a high peak-to-average power ratio (PAPR), they require highly linear PAs to avoid distortion, which conflicts with radar requirements, where high transmit power is always beneficial for sensing. In this work, we analyze the effect of PA-induced distortions on the sensing task for PSK and QAM constellations. By introducing the Signal-to-Distortion Ratio (SDR), we quantify the extent to which distortion limits the ranging task. We complement simulation results with a theoretical characterization of the ambiguity function (AF), explicitly demonstrating how distortion artifacts manifest in the zero-Doppler sidelobes (i.e., ranging sidelobes) and the zero-delay sidelobes. Simulations show that PA distortions impose a pronounced performance ceiling for both constellations, reshape the AF, and reduce detection probability, diminishing the theoretical advantage of unimodular signaling and further degrading OFDM sensing performance for non-constant-envelope signals.
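A minimal numerical sketch of the effect described above, assuming a Rapp solid-state PA model and a Bussgang-style SDR definition (the abstract fixes neither choice; the constellations, back-off, and smoothness parameter here are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1024  # number of OFDM subcarriers

    def ofdm_symbol(constellation):
        """Draw random constellation points and form one time-domain OFDM symbol."""
        syms = rng.choice(constellation, size=N)
        return np.fft.ifft(syms) * np.sqrt(N)  # scaled to unit average power

    def rapp_pa(x, a_sat=1.0, p=2.0):
        """Rapp AM/AM model (assumed PA model): phase-preserving soft limiting."""
        r = np.abs(x)
        return x / (1 + (r / a_sat) ** (2 * p)) ** (1 / (2 * p))

    def sdr_db(x, y):
        """SDR via Bussgang decomposition y = a*x + d, with d uncorrelated with x."""
        a = np.vdot(x, y) / np.vdot(x, x)  # linear gain (LS projection)
        d = y - a * x                      # residual distortion term
        return 10 * np.log10(np.abs(a) ** 2 * np.mean(np.abs(x) ** 2)
                             / np.mean(np.abs(d) ** 2))

    qpsk = np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j])
    qam16 = (np.add.outer(np.arange(-3, 4, 2), 1j * np.arange(-3, 4, 2)).ravel()
             / np.sqrt(10))  # normalized to unit average power

    for name, const in [("QPSK", qpsk), ("16-QAM", qam16)]:
        x = ofdm_symbol(const)
        y = rapp_pa(x, a_sat=1.5)  # illustrative input back-off
        papr = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
        print(f"{name}: PAPR = {papr:.1f} dB, SDR = {sdr_db(x, y):.1f} dB")

Sweeping `a_sat` reproduces the distortion-limited regime: as the back-off shrinks, the SDR floor drops, capping the achievable sensing performance irrespective of transmit power.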




Abstract: The support of artificial intelligence (AI)-based decision-making is a key element in future 6G networks, where the concept of native AI will be introduced. Moreover, AI is widely employed in critical applications such as autonomous driving and medical diagnosis. In such applications, using AI models as black boxes is risky and challenging. Hence, it is crucial to understand and trust the decisions taken by these models. This issue can be addressed by developing explainable AI (XAI) schemes that explain the logic behind a black-box model's behavior and thus ensure its efficient and safe deployment. Recently, we proposed a novel perturbation-based XAI-CHEST framework oriented toward channel estimation in wireless communications. The core idea of the XAI-CHEST framework is to identify the relevant model inputs by inducing high noise on the irrelevant ones. This manuscript provides the detailed theoretical foundations of the XAI-CHEST framework. In particular, we derive the analytical expressions of the XAI-CHEST loss functions and of the noise threshold fine-tuning optimization problem. The designed XAI-CHEST framework thus delivers a smart input feature selection methodology that can further improve overall performance while optimizing the architecture of the employed model. Simulation results show that the XAI-CHEST framework provides valid interpretations, offering improved bit error rate (BER) performance while reducing the required computational complexity compared with classical deep learning (DL)-based channel estimation.
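A minimal PyTorch sketch of the perturbation idea, assuming a learnable per-feature noise level and an illustrative loss weighting (the manuscript derives the exact loss functions and threshold optimization analytically; the parameterization, the weight `lam`, and all names here are assumptions):

    import torch

    def xai_chest_step(estimator, noise_logits, x, h_true, lam=0.1):
        """One training step of a hypothetical noise-injection model.

        noise_logits: learnable per-input-feature parameters; softplus maps them
        to the noise standard deviation injected on each input. The loss rewards
        high tolerated noise (irrelevance) while penalizing any degradation of
        the channel estimate produced by the frozen DL estimator.
        """
        sigma = torch.nn.functional.softplus(noise_logits)  # per-feature noise std
        x_noisy = x + sigma * torch.randn_like(x)           # perturb the inputs
        h_hat = estimator(x_noisy)                          # frozen DL estimator
        fidelity = torch.mean((h_hat - h_true) ** 2)        # keep estimation accurate
        irrelevance = -torch.mean(torch.log(sigma + 1e-8))  # push noise up elsewhere
        return fidelity + lam * irrelevance

    # After training, a threshold tau on sigma selects the relevant inputs:
    # relevant = sigma < tau   (an input that tolerates little noise matters)

The tension between the two terms is the mechanism the framework exploits: relevant inputs keep their noise low to preserve fidelity, so the surviving low-noise features form the interpretation.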
Abstract: Research into 6G networks has been initiated to support a variety of critical artificial intelligence (AI)-assisted applications such as autonomous driving. In such applications, AI-based decisions must be made in real time. These decisions include resource allocation, localization, channel estimation, etc. Given the black-box nature of existing AI-based models, it is highly challenging to understand and trust their decision-making behavior. Therefore, explaining the logic behind those models through explainable AI (XAI) techniques is essential for their deployment in critical applications. This manuscript proposes a novel XAI-based channel estimation (XAI-CHEST) scheme that provides detailed, reasonable interpretability of the deep learning (DL) models employed in doubly-selective channel estimation. The aim of the proposed XAI-CHEST scheme is to identify the relevant model inputs by inducing high noise on the irrelevant ones. As a result, the behavior of the studied DL-based channel estimators can be further analyzed and evaluated based on the generated interpretations. Simulation results show that the proposed XAI-CHEST scheme provides valid interpretations of the DL-based channel estimators for different scenarios.
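To illustrate how such a generated interpretation might be consumed downstream, a hypothetical evaluation sketch (the threshold `tau`, masking by zeroing, and all names are illustrative assumptions, not the paper's API):

    import torch

    def interpret_and_evaluate(estimator, sigma, x, h_true, tau=0.5):
        """Binarize the learned per-feature noise levels into a relevance mask,
        then check that the estimator still performs well when the inputs
        deemed irrelevant are suppressed."""
        relevant = (sigma < tau).float()    # low tolerated noise => relevant input
        h_full = estimator(x)               # estimate from all inputs
        h_masked = estimator(x * relevant)  # estimate from relevant inputs only
        mse_full = torch.mean((h_full - h_true) ** 2).item()
        mse_masked = torch.mean((h_masked - h_true) ** 2).item()
        # A valid interpretation keeps mse_masked close to mse_full
        # while using a strictly smaller set of input features.
        return relevant, mse_full, mse_masked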