Due to the non-ideality of analog components, transceivers experience high levels of hardware imperfections, such as in-phase and quadrature imbalance (IQI), which manifests itself as amplitude and phase mismatches between the I and Q branches. Unless properly mitigated, IQI has a significant negative impact on the reliability and efficiency of high-frequency, high-data-rate systems, such as terahertz wireless networks. Recognizing this, the current paper presents an intelligent transmitter (TX) and an intelligent receiver (RX) architecture that, by employing machine learning (ML) methodologies, is capable of fully mitigating the impact of IQI without estimating the IQI coefficients. The key idea lies in co-training the TX mapper and the RX demapper in order to design, respectively, a constellation and a detection scheme that account for IQI. Two training approaches are implemented, namely: i) a conventional one, which requires a considerable amount of training data, and ii) a reinforcement learning based one, which demands a smaller dataset than the former. The feasibility and efficiency of the proposed architecture and training approaches are validated through Monte Carlo simulations.
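The IQI impairment described above is commonly modeled as a widely linear distortion, r = K1·s + K2·conj(s), where K1 and K2 depend on the gain and phase mismatch. The sketch below illustrates this standard model and its effect on a naive 4-QAM detector via a small Monte Carlo run; the mismatch values (5% gain, 3° phase) and the SNR are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def iqi_distort(s, g=1.05, phi=np.deg2rad(3.0)):
    """Apply IQ imbalance: r = K1*s + K2*conj(s), with
    K1 = (1 + g*exp(1j*phi))/2 and K2 = (1 - g*exp(-1j*phi))/2.
    g = 1, phi = 0 corresponds to a perfectly balanced front end."""
    k1 = (1 + g * np.exp(1j * phi)) / 2
    k2 = (1 - g * np.exp(-1j * phi)) / 2
    return k1 * s + k2 * np.conj(s)

# 4-QAM symbols through an IQI-impaired front end plus AWGN
n = 10_000
bits = rng.integers(0, 2, size=(n, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
snr_db = 10.0
sigma = np.sqrt(10 ** (-snr_db / 10) / 2)
noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r = iqi_distort(s) + noise

# naive detector that ignores IQI: sign decisions on I and Q branches
bits_hat = np.stack([(r.real > 0).astype(int), (r.imag > 0).astype(int)], axis=1)
ber = np.mean(bits_hat != bits)
```

A co-trained mapper/demapper pair, as in the paper, would replace both the fixed QAM constellation and the sign-decision rule with learned ones.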
Until recently, researchers used machine learning methods to compensate for hardware imperfections at the symbol level, indicating that optimum radio-frequency transceiver performance is possible. Nevertheless, such approaches neglect the error correcting codes used in wireless networks, which motivates machine learning (ML) approaches that learn and minimise hardware imperfections at the bit level. In the present work, we evaluate the in-phase and quadrature imbalance (IQI) mitigation capabilities of a graph neural network (GNN)-based intelligent detector. We focus on a high-frequency, highly directional wireless system in which IQI affects both the transmitter (TX) and the receiver (RX). The TX employs a linear error correcting code, whilst the RX uses a GNN-based decoder. The bit error rate (BER) is computed using appropriate Monte Carlo simulations to quantify performance. Finally, the outcomes are compared to both traditional systems using conventional detectors and wireless systems using belief propagation based detectors. Due to the utilization of graph neural networks, the proposed algorithm is highly scalable, has few training parameters, and is able to adapt to various code parameters.
This paper investigates the usage of hybrid automatic repeat request (HARQ) protocols for power-efficient and reliable communications over free space optical (FSO) links. By exploiting the large coherence time of the FSO channel, the proposed transmission schemes combat turbulence-induced fading by retransmitting the failed packets within the same coherence interval. To assess the performance of the presented HARQ technique, we develop a theoretical framework for the outage performance. In more detail, a closed-form expression for the outage probability (OP) is reported and an approximation for the high signal-to-noise ratio (SNR) region is extracted. Building upon the theoretical framework, we formulate a transmission power allocation problem across the retransmission rounds. This optimization problem is solved numerically through the use of an iterative algorithm. In addition, the average throughput of the HARQ schemes under consideration is examined. Simulation results validate the theoretical analysis under different turbulence conditions and demonstrate the performance improvement, in terms of both OP and throughput, of the proposed HARQ schemes compared to fixed transmit power HARQ benchmarks.
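The key mechanism above, retransmitting within one coherence interval so that all rounds see the same channel gain, can be illustrated with a toy Monte Carlo sketch. Exponential fading stands in for the paper's turbulence statistics, Chase combining is assumed at the receiver, and the per-round power vectors are arbitrary examples, not the paper's optimized allocation.

```python
import numpy as np

def harq_sim(powers, rate=1.0, n_trials=200_000, seed=1):
    """HARQ over a slow fading link: the channel power gain h is drawn once
    per packet and stays fixed across all retransmission rounds (large
    coherence time). Round k is sent with power powers[k]; rounds stop as
    soon as the combined SNR supports `rate` bits/s/Hz. Unit noise power
    and exponential fading are simplifying assumptions.
    Returns (outage probability, average consumed power per packet)."""
    rng = np.random.default_rng(seed)
    h = rng.exponential(size=n_trials)
    acc = np.zeros(n_trials)               # combined SNR after each round
    done = np.zeros(n_trials, dtype=bool)  # packet already decoded
    used = np.zeros(n_trials)              # power spent per packet
    for p in powers:
        used += np.where(done, 0.0, p)
        acc += np.where(done, 0.0, p * h)
        done |= np.log2(1 + acc) >= rate
    return 1.0 - done.mean(), used.mean()

p_fix, w_fix = harq_sim([2.0, 2.0, 2.0])    # fixed power per round
p_ramp, w_ramp = harq_sim([1.0, 2.0, 3.0])  # ramped, same total budget
```

Because the channel is frozen within a packet, both schemes reach the same outage after three rounds (equal total power), but ramping the power spends less power on average, which is the intuition behind the power allocation problem formulated in the paper.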
When fully implemented, sixth generation (6G) wireless systems will constitute intelligent wireless networks that enable not only ubiquitous communication but also high-accuracy localization services. They will be the driving force behind this transformation by introducing a new set of characteristics and service capabilities in which localization will coexist with communication while sharing the available resources. To this end, this survey investigates the envisioned applications and use cases of localization in future 6G wireless systems, while analyzing the impact of the major technology enablers. Afterwards, system models for millimeter wave, terahertz and visible light positioning that take into account both line-of-sight (LOS) and non-LOS channels are presented, while localization key performance indicators are revisited alongside their mathematical definitions. Moreover, a detailed review of the state-of-the-art conventional and learning-based localization techniques is conducted. Furthermore, the localization problem is formulated, the wireless system design is considered and the optimization of both is investigated. Finally, insights that arise from the presented analysis are summarized and used to highlight the most important future directions for localization in 6G wireless systems.
This paper presents a quantified assessment of the physical layer security capabilities of reconfigurable intelligent surface (RIS)-aided wireless systems under eavesdropping. Specifically, we derive a closed-form expression for the ergodic secrecy capacity (ESC) that is adaptable to different types of fading and RIS sizes. The channels between the transmitter (TX) and the RIS, the RIS and the legitimate receiver, as well as the TX and the eavesdropper are assumed to follow independent mixture Gamma (MG) distributions. Note that the MG distribution is capable of modeling a large variety of well-known distributions, including Gaussian, Rayleigh, Nakagami-m, Rice, and others. The results reveal that as the RIS size increases, although the legitimate link's diversity order increases, the ESC gain decreases.
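The mixture Gamma model used above represents a channel power gain PDF as a weighted sum of Gamma-type terms, f(x) = Σᵢ aᵢ x^(bᵢ−1) e^(−cᵢx). As a minimal sketch (not the paper's derivation), the snippet below builds the well-known single-term MG representation of the Nakagami-m power gain and checks that it behaves like a proper unit-mean density.

```python
import math
import numpy as np

def mixture_gamma_pdf(x, terms):
    """Mixture Gamma PDF f(x) = sum_i a_i * x**(b_i - 1) * exp(-c_i * x),
    with terms given as a list of (a_i, b_i, c_i) tuples."""
    x = np.asarray(x, dtype=float)
    return sum(a * x ** (b - 1) * np.exp(-c * x) for a, b, c in terms)

def nakagami_m_terms(m):
    """Single-term MG representation of the unit-mean Nakagami-m power gain:
    a = m**m / Gamma(m), b = m, c = m."""
    return [(m ** m / math.gamma(m), m, m)]

# sanity check: the m = 2 density integrates to ~1 with unit mean
x = np.linspace(1e-6, 30.0, 200_000)
dx = x[1] - x[0]
pdf = mixture_gamma_pdf(x, nakagami_m_terms(2.0))
area = np.sum(pdf) * dx   # ~1
mean = np.sum(x * pdf) * dx  # ~1
```

Other fading families (Rayleigh, Rice, shadowed models) are captured by adding more (a, b, c) terms, which is what makes the MG assumption so flexible in the ESC analysis.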
Reconfigurable intelligent surface (RIS)-assisted unmanned aerial vehicle (UAV) communications have been identified as a key enabler of a number of next-generation applications. However, to the best of our knowledge, there is no generalized framework for the quantification of the throughput performance of RIS-assisted UAV systems. Motivated by this, in this paper, we present a comprehensive system model that accounts for the impact of multipath fading, which is modeled by means of the mixture gamma distribution, transceiver hardware imperfections, and stochastic beam disorientation and misalignment, in order to examine the throughput performance of a RIS-assisted UAV wireless system. In this direction, we present novel closed-form expressions for the system's throughput for two scenarios: i) in the presence and ii) in the absence of disorientation and misalignment. Interestingly, our results reveal the importance of accurate modeling of the aforementioned phenomena as well as the existence of an optimal transmission spectral efficiency.
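The existence of an optimal transmission spectral efficiency mentioned above has a simple intuition: raising the rate increases throughput per successful packet but also the outage probability. A toy Monte Carlo sketch makes this trade-off visible; exponential fading is an illustrative stand-in for the mixture-gamma/misalignment channel of the paper, and the 10 dB SNR is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

def throughput(r, snr_db=10.0, n_trials=500_000):
    """Effective throughput r * Pr[log2(1 + SNR*h) >= r] for a transmission
    spectral efficiency r (bits/s/Hz), with exponential channel power gain h
    as a simplifying assumption."""
    h = rng.exponential(size=n_trials)
    snr = 10 ** (snr_db / 10)
    return r * np.mean(np.log2(1 + snr * h) >= r)

rates = np.linspace(0.25, 8.0, 32)
T = np.array([throughput(r) for r in rates])
r_opt = rates[np.argmax(T)]  # interior optimum: too low wastes rate, too high causes outages
```

In the paper, the closed-form throughput expressions allow locating this optimum analytically rather than by simulation.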
Terahertz (THz) wireless networks are expected to catalyze the beyond fifth generation (B5G) era. However, due to the directional nature and the line-of-sight requirement of THz links, as well as the ultra-dense deployment of THz networks, a number of challenges arise that the medium access control (MAC) layer needs to address. In more detail, the need to rethink user association and resource allocation strategies by incorporating artificial intelligence (AI) capable of providing "real-time" solutions in complex and frequently changing environments becomes evident. Moreover, to satisfy the ultra-reliability and low-latency demands of several B5G applications, novel mobility management approaches are required. Motivated by this, this article presents a holistic MAC layer approach that enables intelligent user association and resource allocation, as well as flexible and adaptive mobility management, while maximizing the system's reliability through blockage minimization. In more detail, a fast and centralized joint user association, radio resource allocation, and blockage avoidance scheme by means of a novel metaheuristic-machine learning framework is documented, which maximizes the THz network's performance, while reducing the association latency by approximately three orders of magnitude. To support mobility management and blockage avoidance within the access point (AP) coverage area, a deep reinforcement learning (DRL) approach for beam selection is discussed. Finally, to support user mobility between the coverage areas of neighboring APs, a proactive hand-over mechanism based on AI-assisted fast channel prediction is reported.
In this paper, we introduce a theoretical framework for analyzing the performance of multiple reconfigurable intelligent surface (RIS)-empowered terahertz (THz) wireless systems subject to turbulence and stochastic beam misalignment. In more detail, we extract a closed-form expression for the outage probability that quantifies the joint impact of turbulence and misalignment as well as the effect of the transceivers' hardware imperfections. Our results highlight the importance of accurately modeling both turbulence and misalignment when assessing the performance of multi-RIS-empowered THz wireless systems.
In this paper, we analyze the performance of a reconfigurable intelligent surface (RIS)-assisted unmanned aerial vehicle (UAV) wireless system that is affected by mixture-gamma small-scale fading, stochastic disorientation and misalignment, as well as transceivers' hardware imperfections. First, we statistically characterize the end-to-end channel for both cases, i.e., in the absence as well as in the presence of disorientation and misalignment, by extracting closed-form formulas for the probability density function (PDF) and the cumulative distribution function (CDF). Building on the aforementioned expressions, we extract novel closed-form expressions for the outage probability (OP) in the absence and in the presence of disorientation and misalignment as well as hardware imperfections. In addition, high signal-to-noise ratio OP approximations are derived, leading to the extraction of the diversity order. Finally, an OP floor due to disorientation and misalignment is presented.
Next generation in-to-out-of body biomedical applications have adopted optical wireless communications (OWCs). However, by delving into the published literature, a gap is recognised in modeling the in-to-out-of channel, since most published contributions neglect the particularities of different type of tissues. Towards this direction, in this paper we present a novel pathloss and scattering models for in-to-out-of OWC links. Specifically, we derive extract analytical expressions that accurately describe the absorption of the five main tissues' constituents, namely fat, water, melanin, oxygenated and de-oxygenated blood. Moreover, we formulate a model for the calculation of the absorption coefficient of any generic biological tissue. Next, by incorporating the impact of scattering in the aforementioned model we formulate the complete pathloss model. The developed theoretical framework is verified by means of comparisons between the estimated pathloss and experimental measurements from independent research works. Finally, we illustrate the accuracy of the theoretical framework in estimating the optical properties of any generic tissue based on its constitution. The extracted channel model is capable of boosting the design of optimized communication protocols for a plethora of biomedical applications.
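The composition-based absorption model described above is conceptually a volume-fraction weighted sum over the constituents' absorption coefficients, μ_a = Σᵢ fᵢ μ_a,i. The sketch below illustrates only this structure; every numerical value in it (both the constituent coefficients and the skin-like composition) is a placeholder for illustration, not data from the paper or from measurements.

```python
def tissue_absorption(fractions, mu_constituents):
    """Absorption coefficient (1/cm) of a generic tissue as the
    volume-fraction weighted sum of its constituents' absorption
    coefficients: mu_a = sum_i f_i * mu_a_i."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(f * mu_constituents[name] for name, f in fractions.items())

# hypothetical constituent absorption coefficients at one wavelength (1/cm)
mu_example = {"water": 0.043, "fat": 0.011, "melanin": 40.0,
              "oxy_blood": 5.3, "deoxy_blood": 3.7}
# hypothetical skin-like composition (volume fractions summing to 1)
skin = {"water": 0.65, "fat": 0.15, "melanin": 0.01,
        "oxy_blood": 0.12, "deoxy_blood": 0.07}
mu_a = tissue_absorption(skin, mu_example)
```

The complete pathloss model of the paper additionally folds in a scattering coefficient before applying Beer-Lambert-type attenuation over the propagation depth.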