This paper investigates a deep reinforcement learning (DRL)-based approach for managing channel access in wireless networks. Specifically, we consider a scenario in which an intelligent user device (iUD) shares a time-varying uplink wireless channel with several fixed-transmission-schedule user devices (fUDs) and a malicious jammer with an unknown schedule. The iUD aims to coexist harmoniously with the fUDs, avoid the jammer, and adaptively learn an optimal channel access strategy under dynamic channel conditions, so as to maximize the network's sum cross-layer achievable rate (SCLAR). Through extensive simulations, we demonstrate that when the state space, action space, and rewards are appropriately defined within the DRL framework, the iUD can effectively coexist with the other UDs and optimize the network's SCLAR. We show that the proposed algorithm outperforms both tabular Q-learning and a fully connected deep neural network approach.
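As a rough illustration of the tabular Q-learning baseline mentioned above, the sketch below trains a Q-table for slot selection in a toy frame where fixed-schedule users and a jammer occupy known slots. The slot layout, reward values, and hyper-parameters are all illustrative assumptions, not the paper's formulation.

```python
import random

# Hypothetical toy setup: 5 time slots per frame; fUDs occupy slots {0, 2},
# the jammer hits slot 4; slots 1 and 3 are free. All names and rewards here
# are illustrative assumptions.
N_SLOTS = 5
OCCUPIED = {0, 2, 4}          # fUD slots plus the jammed slot

def reward(slot):
    """+1 for a successful (collision-free) transmission, -1 otherwise."""
    return 1.0 if slot not in OCCUPIED else -1.0

def train_q_table(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # One state per current slot index; actions = which slot to access next.
    q = [[0.0] * N_SLOTS for _ in range(N_SLOTS)]
    state = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = rng.randrange(N_SLOTS)
        else:
            action = max(range(N_SLOTS), key=lambda a: q[state][a])
        r = reward(action)
        next_state = action
        # standard Q-learning update
        q[state][action] += alpha * (r + gamma * max(q[next_state]) - q[state][action])
        state = next_state
    return q

q = train_q_table()
# The greedy policy should learn to pick a free slot (1 or 3) from any state.
policy = [max(range(N_SLOTS), key=lambda a: q[s][a]) for s in range(N_SLOTS)]
print(policy)
```

The DRL variant in the paper replaces the Q-table with a neural function approximator, but the epsilon-greedy interaction loop and the temporal-difference update have the same shape.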
The Industrial Internet of Things (IIoT) enables industries to build large interconnected systems utilizing various technologies that require high data rates. Terahertz (THz) communication is envisioned as a candidate technology for achieving data rates of several terabits-per-second (Tbps). Despite this, establishing a reliable communication link at THz frequencies remains a challenge due to high pathloss and molecular absorption. To overcome these limitations, this paper proposes using intelligent reconfigurable surfaces (IRSs) with THz communications to enable future smart factories for the IIoT. In this paper, we formulate the power allocation and joint IIoT device and IRS association (JIIA) problem, which is a mixed-integer nonlinear programming (MINLP) problem. Furthermore, the JIIA problem aims to maximize the sum rate with imperfect channel state information (CSI). To address this non-deterministic polynomial-time hard (NP-hard) problem, we decompose the problem into multiple sub-problems, which we solve iteratively. Specifically, we propose a Gale-Shapley algorithm-based JIIA solution to obtain stable matching between uplink and downlink IRSs. We validate the proposed solution by comparing the Gale-Shapley-based JIIA algorithm with exhaustive search (ES), greedy search (GS), and random association (RA) under imperfect CSI. The complexity analysis shows that our algorithm is more efficient than ES.
This paper studies the statistical characterization of ground-to-air (G2A) and reconfigurable intelligent surface (RIS)-assisted air-to-ground (A2G) communications with unmanned aerial vehicles (UAVs) in terrestrial and non-terrestrial networks under the impact of channel aging. We first model the G2A and A2G signal-to-noise ratios (SNRs) as non-central complex Gaussian quadratic random variables (RVs) and derive their exact probability density functions, offering a unique characterization of the A2G SNR as the product of two scaled non-central chi-square RVs. We also find that, for a large number of RIS elements, the RIS-assisted A2G channel can be characterized as a single Rician fading channel. Our results reveal the presence of channel hardening in A2G communication at low UAV speeds, for which we derive the maximum target spectral efficiency (SE) that a system can sustain while maintaining a required outage level. High UAV speeds, exceeding 50 m/s, however, lead to significant performance degradation that cannot be mitigated by increasing the number of RIS elements.
This letter considers a UAV aiding communication between a ground transmitter and a ground receiver in the presence of co-channel interference. A discrete-time Markov process is adopted to model the complex nature of the Air-to-Ground (A2G) channel, including the occurrence of Line-of-Sight, Non-Line-of-Sight, and blockage events. Moreover, an adaptive phase-shift-enabled Reconfigurable Intelligent Surface (RIS) is deployed to combat A2G blockage events. Novel frameworks based on the shadowed Rician distribution are proposed to derive closed-form expressions for the Ground-to-Air and A2G SINR distributions. Numerical results show that RISs with large numbers of elements, e.g., 256 RIS elements, improve end-to-end Outage Probability (OP) and reduce blockages.
Terahertz (THz) communication is a promising technology for future wireless communications, offering data rates of up to several terabits-per-second (Tbps). However, the range of THz band communications is often limited by high pathloss and molecular absorption. To overcome these challenges, this paper proposes intelligent reconfigurable surfaces (IRSs) to enhance THz communication systems. Specifically, we introduce an angle-based trigonometric channel model to evaluate the effectiveness of IRS-aided THz networks. Additionally, to maximize the sum rate, we formulate the source-IRS-destination matching problem, which is a mixed-integer nonlinear programming (MINLP) problem. To solve this non-deterministic polynomial-time hard (NP-hard) problem, the paper proposes a Gale-Shapley-based solution that obtains stable matches between sources and IRSs, as well as between destinations and IRSs in the first and second sub-problems, respectively.
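The Gale-Shapley (deferred-acceptance) procedure underlying the proposed matching can be sketched as follows; the source and IRS names, and their preference lists, are toy assumptions for illustration only.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Deferred-acceptance matching: proposers propose in preference order;
    acceptors hold the best offer seen so far. Returns {proposer: acceptor}."""
    # rank[a][p] = position of proposer p in acceptor a's preference list
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next index to propose to
    engaged = {}                                   # acceptor -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:     # a prefers the new proposer
            free.append(engaged[a])                # previous partner is freed
            engaged[a] = p
        else:
            free.append(p)                         # rejected; propose again later
    return {p: a for a, p in engaged.items()}

# Toy instance: sources S1..S3 propose to IRSs R1..R3 (names are illustrative).
sources = {"S1": ["R1", "R2", "R3"],
           "S2": ["R1", "R3", "R2"],
           "S3": ["R2", "R1", "R3"]}
irss    = {"R1": ["S2", "S1", "S3"],
           "R2": ["S1", "S3", "S2"],
           "R3": ["S3", "S1", "S2"]}
match = gale_shapley(sources, irss)
print(match)
```

In the paper's two sub-problems, the same procedure would run once for source-IRS matching and once for destination-IRS matching, with preference lists derived from the achievable rates rather than fixed as here.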
At the confluence of 6G, deep learning (DL), and natural language processing (NLP), DL-enabled text semantic communication (SemCom) has emerged as a 6G enabler by promising to minimize bandwidth consumption, transmission delay, and power usage. Among text SemCom techniques, \textit{DeepSC} is a popular scheme that leverages advancements in DL and NLP to reliably transmit semantic information in low signal-to-noise ratio (SNR) regimes. To understand the fundamental limits of such a transmission paradigm, our recently developed theory \cite{Getu'23_Performance_Limits} predicted the performance limits of DeepSC under radio frequency interference (RFI). Although these limits were corroborated by simulations, trained deep networks can defy classical statistical wisdom, and hence extensive computer experiments are needed to validate our theory. Accordingly, this empirical work follows, training and testing DeepSC on the proceedings of the European Parliament (Europarl) dataset. Employing training, validation, and testing sets \textit{tokenized and vectorized} from Europarl, we train the DeepSC architecture in Keras 2.9 with TensorFlow 2.9 as a backend and test it under Gaussian multi-interferer RFI received over Rayleigh fading channels. Validating our theory, the testing results corroborate that DeepSC produces semantically irrelevant sentences as the number of Gaussian RFI emitters gets very large. Therefore, a fundamental 6G design paradigm for \textit{interference-resistant and robust SemCom} (IR$^2$ SemCom) is needed.
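As a minimal, framework-free illustration of the tokenize-and-vectorize preprocessing step mentioned above (the work itself uses Keras 2.9 with TensorFlow 2.9 on Europarl), the sketch below builds a vocabulary from a tiny toy corpus and maps sentences to padded id sequences; the corpus, special tokens, and sequence length are illustrative assumptions.

```python
# Special tokens for sentence start/end, padding, and out-of-vocabulary words.
START, END, PAD, UNK = "<start>", "<end>", "<pad>", "<unk>"

corpus = ["the session is resumed", "the vote will take place"]

def tokenize(sentence):
    return [START] + sentence.lower().split() + [END]

# Build the vocabulary from the training corpus only.
vocab = [PAD, UNK, START, END]
for s in corpus:
    for w in s.lower().split():
        if w not in vocab:
            vocab.append(w)
word2id = {w: i for i, w in enumerate(vocab)}

def vectorize(sentence, max_len=10):
    """Map a sentence to a fixed-length sequence of token ids."""
    ids = [word2id.get(w, word2id[UNK]) for w in tokenize(sentence)]
    ids = ids[:max_len]
    return ids + [word2id[PAD]] * (max_len - len(ids))

# "closed" is out of vocabulary, so it maps to the <unk> id.
vec = vectorize("the session is closed")
print(vec)
```

In the actual pipeline, a layer such as Keras `TextVectorization` plays this role at corpus scale; the point here is only the mapping from raw text to padded integer sequences that the transceiver consumes.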
Although deep learning (DL) has led to several breakthroughs in many disciplines as diverse as chemistry, computer science, electrical engineering, mathematics, medicine, neuroscience, and physics, a comprehensive understanding of why and how DL is empirically successful remains fundamentally elusive. To attack this fundamental problem and unravel the mysteries behind DL's empirical successes, significant innovations toward a unified theory of DL have been made. These innovations encompass fundamental advances in optimization, generalization, and approximation. Despite these advances, however, no work to date has offered a way to quantify the testing performance of a DL-based algorithm employed to solve a pattern classification problem. To overcome this fundamental challenge in part, this paper exposes the fundamental testing performance limits of DL-based binary classifiers trained with hinge loss. For binary classifiers based on deep rectified linear unit (ReLU) feedforward neural networks (FNNs), and for ones based on deep FNNs with ReLU and Tanh activation, we derive their respective novel asymptotic testing performance limits. The derived testing performance limits are validated by extensive computer experiments.
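To make the objects in this analysis concrete, the sketch below evaluates a tiny ReLU feedforward binary classifier with hinge loss; the weights and inputs are toy values chosen for illustration, not anything derived in the paper.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """Affine layer: W @ x + b, with W given as a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def fnn(x):
    # Two ReLU layers followed by one linear output unit (toy weights).
    h1 = relu(dense(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]))
    h2 = relu(dense(h1, [[1.0, 1.0]], [0.0]))
    return h2[0] - 0.5            # scalar score; its sign gives the class

def hinge_loss(score, label):     # label in {-1, +1}
    """Zero when the score agrees with the label by a margin of at least 1."""
    return max(0.0, 1.0 - label * score)

x, y = [2.0, 0.0], +1
s = fnn(x)
print(s, hinge_loss(s, y))
```

The paper's limits concern the testing (generalization) behavior of such classifiers in the asymptotic regime; this snippet only fixes the forward pass and the loss that the analysis is stated in terms of.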
In this paper, we propose a simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) and energy buffer aided multiple-input single-output (MISO) simultaneous wireless information and power transfer (SWIPT) non-orthogonal multiple access (NOMA) system, which consists of a STAR-RIS, an access point (AP), and reflection users and transmission users with energy buffers. In the proposed system, the multi-antenna AP can transmit information and energy to several single-antenna reflection and transmission users simultaneously in a NOMA fashion, where the power transfer and information transmission states of the users are modeled using Markov chains. The reflection and transmission users harvest and store the energy in energy buffers as additional power supplies. Closed-form expressions for the power outage probability, information outage probability, sum throughput, and joint outage probability of the proposed system are derived over Nakagami-m fading channels and validated via simulations. Results demonstrate that the proposed system achieves better performance in comparison to the STAR-RIS aided MISO SWIPT-NOMA buffer-less, conventional RIS and energy buffer aided MISO SWIPT-NOMA, and STAR-RIS and energy buffer aided MISO SWIPT-time-division multiple access (TDMA) systems. Furthermore, a particle swarm optimization based power allocation (PSO-PA) algorithm is designed to maximize the sum throughput with a constraint on the joint outage probability. Simulation results illustrate that the proposed PSO-PA algorithm achieves improved sum throughput for the proposed system.
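A minimal sketch of a PSO-based power allocation step of the kind described: particles are candidate power vectors, and fitness is a sum-throughput surrogate under a total-power budget. The channel gains, budget, and PSO hyper-parameters are illustrative assumptions, and the actual PSO-PA additionally enforces a joint-outage-probability constraint.

```python
import math
import random

# Illustrative per-user channel gains and total power budget (assumptions).
GAINS, P_TOTAL = [2.0, 1.0, 0.5, 0.25], 4.0

def project(p):
    """Clip to non-negative powers and rescale onto the total-power budget."""
    p = [max(0.0, x) for x in p]
    s = sum(p)
    return [x * P_TOTAL / s for x in p] if s > 0 else [P_TOTAL / len(p)] * len(p)

def fitness(p):
    """Sum-throughput surrogate: sum of log2(1 + p_i * g_i)."""
    return sum(math.log2(1.0 + pi * gi) for pi, gi in zip(p, GAINS))

def pso_pa(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    dim = len(GAINS)
    pos = [project([rng.random() for _ in range(dim)]) for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = max(pbest, key=fitness)             # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # standard PSO velocity update (inertia + cognitive + social)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = project(pos[i])            # re-impose the power constraint
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=fitness)
    return gbest

p_star = pso_pa()
uniform = [P_TOTAL / len(GAINS)] * len(GAINS)
print(p_star, fitness(p_star), fitness(uniform))
```

For this concave surrogate the optimum is the classical water-filling allocation, so the PSO result should allocate more power to the stronger gains and beat the uniform split.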
Motivated by climate change, increasing industrialization, and energy reliability concerns, the smart grid is set to revolutionize traditional power systems. Moreover, the exponential annual rise in the number of grid-connected users, together with emerging key players such as electric vehicles, strains the limited radio resources and underscores the need for novel and scalable resource management techniques. Digital twin is a cutting-edge virtualization technology that has shown great potential by offering solutions for inherent bottlenecks in traditional wireless networks. In this article, we set the stage for the various roles digital twinning can fulfill in optimizing congested radio resources in a proactive and resilient smart grid. Digital twins can help smart grid networks through real-time monitoring, advanced precise modeling, and efficient radio resource allocation for normal operations and for service restoration following unexpected events. However, reliable real-time communications, intricate abstraction abilities, interoperability with other smart grid technologies, robust computing capabilities, and resilient security schemes remain open challenges for future work on digital twins.
Semantic communication (SemCom) aims to convey the meaning behind a transmitted message by transmitting only semantically-relevant information. This semantic-centric design helps to minimize power usage, bandwidth consumption, and transmission delay. SemCom and goal-oriented SemCom (or effectiveness-level SemCom) are therefore promising enablers of 6G and developing rapidly. Despite this swift development, the design, analysis, optimization, and realization of robust and intelligent SemCom and goal-oriented SemCom are fraught with many fundamental challenges. One such challenge is that the lack of unified/universal metrics for SemCom and goal-oriented SemCom can stifle research progress on their respective algorithmic, theoretical, and implementation frontiers. Consequently, this survey paper documents the existing metrics -- scattered in many references -- of wireless SemCom, optical SemCom, quantum SemCom, and goal-oriented wireless SemCom. By doing so, this paper aims to inspire the design, analysis, and optimization of a wide variety of SemCom and goal-oriented SemCom systems. This article also stimulates the development of unified/universal performance assessment metrics for SemCom and goal-oriented SemCom, as the existing metrics are purely statistical and hardly applicable to reasoning-type tasks that constitute the heart of 6G and beyond.