With the incredible results achieved by generative pre-trained transformers (GPT) and diffusion models, generative AI (GenAI) is envisioned to yield remarkable breakthroughs in various industrial and academic domains. In this paper, we utilize denoising diffusion probabilistic models (DDPMs), one of the state-of-the-art classes of generative models, for probabilistic constellation shaping in wireless communications. While the geometry of constellations is predetermined by networking standards, probabilistic constellation shaping can enhance the information rate and communication performance by designing the probability of occurrence (generation) of constellation symbols. Unlike conventional methods that solve an optimization problem over the discrete distribution of constellations, we take a radically different approach. Exploiting the ``denoise-and-generate'' characteristic of DDPMs, the key idea is to learn how to generate constellation symbols out of noise, ``mimicking'' the way the receiver performs symbol reconstruction. By doing so, we make the constellation symbols sent by the transmitter and those inferred (reconstructed) at the receiver as similar as possible. Our simulations show that the proposed scheme outperforms a deep neural network (DNN)-based benchmark and uniform shaping, while providing network resilience as well as robust out-of-distribution performance in low-SNR regimes and under non-Gaussian noise. Notably, a threefold improvement in mutual information is achieved over the DNN-based approach for the 64-QAM geometry.
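To make the ``denoise-and-generate'' idea concrete, the following minimal PyTorch sketch trains a toy denoiser on 64-QAM symbols and then samples new symbols out of pure Gaussian noise, so that the empirical frequency of each symbol gives a learned (shaped) distribution. The step count, noise schedule, and network size are illustrative assumptions, not the paper's configuration.
\begin{verbatim}
# Minimal sketch: learn to generate 64-QAM constellation symbols out of noise.
import torch
import torch.nn as nn

T = 100                                  # diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# 64-QAM constellation as 2-D (I/Q) points, normalized to unit average power.
levels = torch.arange(8, dtype=torch.float32) * 2 - 7
grid = torch.cartesian_prod(levels, levels)
constellation = grid / grid.pow(2).sum(dim=1).mean().sqrt()

class Denoiser(nn.Module):
    """Tiny MLP that predicts the noise added to a 2-D symbol at step t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 2))
    def forward(self, x, t):
        return self.net(torch.cat([x, t.float().unsqueeze(1) / T], dim=1))

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Standard DDPM training objective: predict epsilon from the noised sample.
for it in range(2000):
    x0 = constellation[torch.randint(0, 64, (256,))]
    t = torch.randint(0, T, (256,))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    loss = (model(xt, t) - eps).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Ancestral sampling: start from pure noise and denoise step by step,
# mimicking how the receiver reconstructs a symbol out of noise.
with torch.no_grad():
    x = torch.randn(1000, 2)
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        eps_hat = model(x, torch.full((1000,), t))
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps_hat) \
            / alphas[t].sqrt()
        x = x + betas[t].sqrt() * z
# Mapping each sample to its nearest constellation point and counting the
# hits yields the learned, generally non-uniform, generation probabilities.
\end{verbatim}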
Generative AI has received significant attention across a spectrum of diverse industrial and academic domains, thanks to the remarkable results achieved by deep generative models such as generative pre-trained transformers (GPT) and diffusion models. In this paper, we explore the applications of denoising diffusion probabilistic models (DDPMs) in wireless communication systems under practical assumptions such as hardware impairments (HWI), the low-SNR regime, and quantization error. Diffusion models are a new class of state-of-the-art generative models that have already shown notable success in prominent systems from OpenAI and Google Brain. The intuition behind DDPMs is to decompose the data generation process into small "denoising" steps. Inspired by this, we propose a denoising diffusion model-based receiver for a practical wireless communication scheme that provides network resilience under low-SNR regimes, non-Gaussian noise, different HWI levels, and quantization error. We evaluate the reconstruction performance of our scheme in terms of bit error rate (BER) and mean-squared error (MSE). Our results show that BER improvements of 30% and 20% can be achieved compared to deep neural network (DNN)-based receivers in AWGN and non-Gaussian scenarios, respectively.
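As a self-contained illustration of a diffusion-based receiver, the sketch below replaces the trained network with the exact posterior-mean denoiser for a known QPSK prior (so it runs without any training), matches the received SNR to a diffusion step, runs the reverse chain, and measures BER. The schedule, the 5 dB operating point, and the Gray mapping are assumptions made for the sketch, not the paper's configuration.
\begin{verbatim}
# Toy diffusion-based receiver over AWGN; a closed-form denoiser stands in
# for the trained DDPM network.
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

const = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)  # QPSK
bits = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])                    # Gray map

def eps_hat(x, t):
    """Exact E[eps | x_t] under a uniform prior over `const`
    (stand-in for the learned denoising network)."""
    d = x[:, None, :] - np.sqrt(abar[t]) * const[None, :, :]
    w = np.exp(-(d ** 2).sum(-1) / (2 * (1 - abar[t])))
    w /= w.sum(1, keepdims=True)
    x0_mean = w @ const
    return (x - np.sqrt(abar[t]) * x0_mean) / np.sqrt(1 - abar[t])

# Transmit random QPSK symbols over AWGN at an assumed 5 dB SNR.
n_sym, snr_db = 20000, 5.0
idx = rng.integers(0, 4, n_sym)
sigma2 = 10 ** (-snr_db / 10)
y = const[idx] + rng.normal(scale=np.sqrt(sigma2 / 2), size=(n_sym, 2))

# Match the received per-component noise variance sigma2/2 to a diffusion
# step via (1-abar_t)/abar_t, rescale y onto the trajectory, then denoise.
t_star = int(np.argmin(np.abs((1 - abar) / abar - sigma2 / 2)))
x = np.sqrt(abar[t_star]) * y
for t in range(t_star, -1, -1):
    z = rng.normal(size=x.shape) if t > 0 else 0.0
    x = (x - betas[t] / np.sqrt(1 - abar[t]) * eps_hat(x, t)) \
        / np.sqrt(alphas[t])
    x = x + np.sqrt(betas[t]) * z

# Demap the denoised points to the nearest symbol and count bit errors.
dec = np.argmin(((x[:, None, :] - const[None, :, :]) ** 2).sum(-1), axis=1)
ber = (bits[dec] != bits[idx]).mean()
print(f"BER after diffusion denoising at {snr_db} dB: {ber:.4f}")
\end{verbatim}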
Reliable energy supply remains a crucial challenge in the Internet of Things (IoT). Although relying on batteries is cost-effective for a few devices, it is neither a scalable nor a sustainable charging solution as the network grows massive. Moreover, current energy-saving technologies alone cannot realize, for instance, the vision of zero-energy devices and the deploy-and-forget paradigm, which can unlock a myriad of new use cases. In this context, sustainable radio frequency wireless energy transfer emerges as an attractive solution for efficiently charging the next generation of ultra-low-power IoT devices. Herein, we highlight that sustainable charging is broader than conventional green charging, as it focuses on balancing economic prosperity and social equity in addition to environmental health. Moreover, we overview the key enablers for realizing this vision and the associated challenges. We discuss the economic implications of powering energy transmitters with ambient energy sources and reveal insights into their optimal deployment. Finally, we highlight relevant research challenges and candidate solutions.
The reconfigurable intelligent surface (RIS) is one of the significant technological advancements that will mark next-generation wireless networks. RIS technology also opens up the possibility of new security threats, since the reflection of impinging signals can be used for malicious purposes. This article introduces the basic concept of a RIS-assisted attack that re-uses the legitimate signal for a malicious objective. Specific attacks are identified from this base scenario, and the RIS-assisted signal cancellation attack is selected for evaluation as an attack that inherently exploits RIS capabilities. The key takeaways from the evaluation are that an effective attack requires accurate channel information and a RIS deployed in a favorable location (from the attacker's point of view), and that it disproportionately affects legitimate links that already suffer from severe path loss. These observations motivate specific security solutions and recommendations for future work.
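The cancellation principle can be illustrated with a short numpy sketch under an assumed narrowband channel model: knowing all channels accurately, the attacker rotates every cascaded RIS path into anti-phase with the direct link and enables only as many elements as needed, so the reflected sum nearly annihilates the legitimate signal.
\begin{verbatim}
# Illustrative RIS-assisted signal cancellation; channel scales are assumed.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                                    # RIS elements
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)     # direct TX-RX link
g = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # TX -> RIS
f = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # RIS -> RX
cascade = g * f                                           # cascaded gains

# Anti-phase every reflected path, but enable elements (strongest first)
# only while the reflected sum still falls short of |h_d|; adding more
# elements would overshoot rather than cancel.
order = np.argsort(-np.abs(cascade))
csum = np.cumsum(np.abs(cascade[order]))
k = int(np.searchsorted(csum, np.abs(h_d)))
theta = np.pi + np.angle(h_d) - np.angle(cascade)
h_ris = np.sum(cascade[order[:k]] * np.exp(1j * theta[order[:k]]))

h_tot = h_d + h_ris
print(f"|h_d|         = {abs(h_d):.4f}")
print(f"|h_d + h_RIS| = {abs(h_tot):.4f} "
      f"({20 * np.log10(abs(h_tot) / abs(h_d)):.1f} dB)")
# Note the CSI sensitivity: any phase error in theta leaves a residual.
\end{verbatim}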
Many emerging Internet of Things (IoT) applications rely on information collected by sensor nodes, where the freshness of information is an important criterion. \textit{Age of Information} (AoI) is a metric that quantifies information timeliness, i.e., the freshness of the received information or status update. This work considers a setup of deployed sensors in an IoT network, where multiple unmanned aerial vehicles (UAVs) serve as mobile relay nodes between the sensors and the base station. We formulate an optimization problem to jointly plan the UAVs' trajectories while minimizing the AoI of the received messages, ensuring that the information received at the base station is as fresh as possible. This complex optimization problem is solved efficiently using a deep reinforcement learning (DRL) algorithm. In particular, we propose a deep Q-network, which serves as a function approximator for the state-action value function. The proposed scheme converges quickly and results in a lower AoI than a random walk scheme. Our algorithm reduces the average age by approximately $25\%$ and requires up to $50\%$ less energy than the baseline scheme.
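The trajectory-planning idea can be sketched on a deliberately tiny example: a single UAV moving along a line of sensors, a deep Q-network approximating the state-action values, and a reward equal to the negative total AoI. The environment, reward, and hyperparameters are simplifying assumptions for illustration, not the paper's system model.
\begin{verbatim}
# Toy AoI-minimizing trajectory planning with a deep Q-network.
import numpy as np
import torch
import torch.nn as nn

K = 4                                  # sensors at positions 0..K-1 (assumed)
ACTIONS = (-1, 0, +1)                  # move left, hover/collect, move right

def step(pos, age, a):
    """One slot: every AoI grows by 1; hovering collects the local update."""
    pos = int(np.clip(pos + ACTIONS[a], 0, K - 1))
    age = age + 1.0
    if ACTIONS[a] == 0:
        age[pos] = 0.0                 # fresh update relayed to the BS
    return pos, age, -age.sum()        # reward: negative total AoI

def obs(pos, age):
    return torch.tensor(np.concatenate(([pos / (K - 1)], age / 50.0)),
                        dtype=torch.float32)

# Q-network: state (position + ages) -> value of each of the 3 actions.
q = nn.Sequential(nn.Linear(K + 1, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.2

for episode in range(300):
    pos, age = 0, np.zeros(K)
    for _ in range(100):
        s = obs(pos, age)
        a = np.random.randint(3) if np.random.rand() < eps \
            else int(q(s).argmax())
        pos, age, r = step(pos, age, a)
        with torch.no_grad():          # bootstrapped one-step TD target
            target = r + gamma * q(obs(pos, age)).max()
        loss = (q(s)[a] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()

# Evaluate the greedy learned policy by its time-averaged total AoI.
pos, age, total = 0, np.zeros(K), 0.0
for _ in range(200):
    with torch.no_grad():
        a = int(q(obs(pos, age)).argmax())
    pos, age, _ = step(pos, age, a)
    total += age.sum()
print(f"average total AoI under greedy DQN policy: {total / 200:.2f}")
\end{verbatim}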
Future industrial applications will encompass compelling new use cases requiring stringent performance guarantees over multiple key performance indicators (KPIs), such as reliability, dependability, latency, time synchronization, and security. Achieving such stringent and diverse service requirements necessitates the design of a special-purpose Industrial Internet of Things (IIoT) network comprising a multitude of specialized functionalities and technological enablers. This article proposes an innovative architecture for such a special-purpose 6G IIoT network, incorporating seven functional building blocks categorized into special-purpose functionalities and enabling technologies. The former consists of the Wireless Environment Control, Traffic/Channel Prediction, Proactive Resource Management, and End-to-End Optimization functions, whereas the latter includes Synchronization and Coordination, Machine Learning and Artificial Intelligence Algorithms, and Auxiliary Functions. The proposed architecture aims at providing a resource-efficient and holistic solution for the complex and dynamically challenging requirements imposed by future 6G industrial use cases. Selected test scenarios are provided and assessed to illustrate cross-functional collaboration and demonstrate the applicability of the proposed architecture in a wireless IIoT network.
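As a purely hypothetical sketch of such cross-functional collaboration, the snippet below wires an invented Traffic/Channel Prediction stub into an invented Proactive Resource Management stub; the interfaces and logic (TrafficPredictor, ProactiveResourceManager) are illustrative assumptions, not part of the proposed architecture's specification.
\begin{verbatim}
# Hypothetical interplay of two building blocks (illustrative only).
from dataclasses import dataclass

@dataclass
class TrafficForecast:
    device_id: int
    expected_load: float            # predicted packets per slot

class TrafficPredictor:
    """Traffic/Channel Prediction block: trivial moving-average stub."""
    def __init__(self):
        self.history = {}
    def observe(self, device_id, load):
        self.history.setdefault(device_id, []).append(load)
    def predict(self, device_id):
        h = self.history.get(device_id, [0.0])[-5:]
        return TrafficForecast(device_id, sum(h) / len(h))

class ProactiveResourceManager:
    """Proactive Resource Management block: reserves slots ahead of demand."""
    def __init__(self, total_slots):
        self.total_slots = total_slots
    def allocate(self, forecasts):
        demand = sum(f.expected_load for f in forecasts) or 1.0
        return {f.device_id:
                round(self.total_slots * f.expected_load / demand)
                for f in forecasts}

predictor = TrafficPredictor()
manager = ProactiveResourceManager(total_slots=100)
for dev, loads in {1: [3, 4, 5], 2: [9, 8, 10]}.items():
    for x in loads:
        predictor.observe(dev, x)
print(manager.allocate([predictor.predict(d) for d in (1, 2)]))
\end{verbatim}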
In this paper, a multi-objective optimization problem (MOOP) is formulated for maximizing the achievable finite blocklength (FBL) rate while minimizing the utilized channel blocklengths (CBLs) in a reconfigurable intelligent surface (RIS)-assisted short-packet communication system. The formulated MOOP has two objective functions: maximizing the total FBL rate with a target error probability, and minimizing the total utilized CBLs, which are directly proportional to the transmission duration. The optimization variables are the base station (BS) transmit power, the number of CBLs, and the passive beamforming at the RIS. Since the proposed non-convex problem is intractable to solve directly, the Tchebyshev method is invoked to transform it into a single-objective optimization problem, and the alternating optimization (AO) technique is then employed to iteratively optimize the parameters across three main sub-problems. The numerical results show a fundamental trade-off between maximizing the achievable rate in the FBL regime and reducing the transmission duration. They also emphasize the applicability of RIS technology in reducing the utilized CBLs while significantly increasing the achievable rate.
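The scalarization step can be illustrated numerically on a toy single-link version of the trade-off, using the well-known normal approximation of the FBL rate; the fixed 10 dB SNR, the weight sweep, and the grid search stand in for the paper's AO procedure and are assumptions for illustration only.
\begin{verbatim}
# Tchebyshev scalarization of (max FBL rate, min blocklength) on a toy link.
import numpy as np
from scipy.special import erfcinv

def fbl_rate(m, snr, eps=1e-5):
    """Normal-approximation achievable payload (bits) over m channel uses."""
    C = np.log2(1 + snr)
    V = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2  # channel dispersion
    Qinv = np.sqrt(2) * erfcinv(2 * eps)               # Q^{-1}(eps)
    return m * C - np.sqrt(m * V) * Qinv

m_grid = np.arange(100, 2001)          # candidate CBLs (channel uses)
snr = 10 ** (10 / 10)                  # fixed 10 dB SNR for the sketch
R = fbl_rate(m_grid, snr)

# Utopia points: best value of each objective taken on its own.
R_star, m_star = R.max(), m_grid.min()

for lam in (0.2, 0.5, 0.8):            # weight sweep traces the trade-off
    # Tchebyshev objective: worst normalized regret across both goals.
    g = np.maximum(lam * (R_star - R) / R_star,
                   (1 - lam) * (m_grid - m_star) / (m_grid.max() - m_star))
    k = int(np.argmin(g))
    print(f"lam={lam:.1f}: CBL {m_grid[k]:4d}, rate {R[k]:8.1f} bits")
\end{verbatim}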
A reconfigurable intelligent surface (RIS) can be used to establish line-of-sight (LoS) communication when the direct path is compromised, a common occurrence in millimeter wave (mmWave) networks. In this paper, we focus on the uplink channel estimation of such a network. We formulate it as a sparse signal recovery problem by discretizing the angles of arrival (AoAs) at the base station (BS). On-grid and off-grid AoAs are considered separately. In the on-grid case, we propose an algorithm to estimate the direct and RIS channels. Neural networks trained with supervised learning are used to estimate the residual angles in the off-grid case and the AoAs in both cases. Numerical results show the performance gains of the proposed algorithms in both cases.
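Below is a minimal sketch of the on-grid formulation, assuming a half-wavelength uniform linear array at the BS and using plain orthogonal matching pursuit as a generic stand-in for the proposed recovery algorithm; the array size, grid resolution, and sparsity level are illustrative assumptions.
\begin{verbatim}
# On-grid AoA estimation cast as sparse recovery, solved with OMP.
import numpy as np

rng = np.random.default_rng(2)
M, G, K = 32, 181, 3                   # BS antennas, grid points, paths

def steer(theta_deg):
    """ULA steering vectors, half-wavelength spacing."""
    k = np.arange(M)
    return np.exp(1j * np.pi * k[:, None]
                  * np.sin(np.deg2rad(theta_deg)))

grid = np.linspace(-90, 90, G)
A = steer(grid) / np.sqrt(M)           # M x G dictionary

# Ground truth: K on-grid AoAs with random complex gains, plus noise.
support = rng.choice(G, K, replace=False)
x = np.zeros(G, complex)
x[support] = rng.normal(size=K) + 1j * rng.normal(size=K)
y = A @ x + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

# OMP: greedily pick the best-matching grid angle, then re-fit all
# selected gains by least squares and update the residual.
sel, r = [], y.copy()
for _ in range(K):
    sel.append(int(np.argmax(np.abs(A.conj().T @ r))))
    gains, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
    r = y - A[:, sel] @ gains

print("true AoAs (deg):", np.sort(grid[support]).round(1))
print("est. AoAs (deg):", np.sort(grid[sel]).round(1))
\end{verbatim}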
Ultra-reliable low-latency communication (URLLC) is a new service class introduced in 5G, characterized by strict reliability $(1-10^{-5})$ and low latency requirements (1 ms). To meet these requirements, several strategies such as overprovisioning of resources and channel-predictive algorithms have been developed. This paper describes the application of a nonlinear autoregressive neural network (NARNN) as a novel approach to forecast interference levels in a wireless system for the purpose of efficient resource allocation. Accurate interference forecasts also make it possible to meet specific outage probability requirements in URLLC scenarios. The performance of this proposal is evaluated in terms of NARNN prediction accuracy and system resource usage. Our approach achieved a promising mean absolute percentage error of 7.8% on interference predictions and reduced resource usage by up to 15% compared to a recently proposed interference prediction algorithm.
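The predictor's structure can be shown in a few lines: a nonlinear autoregressive model is simply a network mapping the last $p$ interference samples to the next one. The synthetic interference trace, window length, and network size below are assumptions for illustration, and the reported MAPE is for this toy data only.
\begin{verbatim}
# Toy NAR interference predictor: MLP over a sliding window of p samples.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(3)
t = np.arange(2000)
# Synthetic interference trace: slow periodic load pattern plus noise.
trace = 1.5 + np.sin(2 * np.pi * t / 200) + 0.1 * rng.normal(size=t.size)

p, split = 10, 1500                    # AR order and train/test split
X = np.lib.stride_tricks.sliding_window_view(trace, p)[:-1]
y = trace[p:]
Xtr = torch.tensor(X[:split], dtype=torch.float32)
ytr = torch.tensor(y[:split], dtype=torch.float32)
Xte = torch.tensor(X[split:], dtype=torch.float32)
yte = torch.tensor(y[split:], dtype=torch.float32)

net = nn.Sequential(nn.Linear(p, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for epoch in range(300):               # full-batch one-step-ahead training
    loss = ((net(Xtr).squeeze(1) - ytr) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    pred = net(Xte).squeeze(1)
mape = (torch.abs(pred - yte) / torch.abs(yte)).mean() * 100
print(f"one-step-ahead MAPE on held-out samples: {mape:.1f}%")
\end{verbatim}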
Leveraging higher frequencies up to the THz band paves the way towards faster networks in the next generation of wireless communications. However, such short wavelengths are susceptible to higher scattering and path loss, forcing the link to depend predominantly on the line-of-sight (LOS) path. The dynamic movement of humans has been identified as a major source of blockage of such LOS links. In this work, we aim to overcome this challenge by predicting human blockages of the LOS link, enabling the transmitter to anticipate the blockage and act intelligently. We propose an end-to-end system of infrastructure-mounted LiDAR sensors that captures the dynamics of the communication environment visually and processes the data with deep learning and ray casting techniques to predict future blockages. Experiments indicate that the system predicts upcoming blockages with an accuracy of 87%, a precision of 78%, and a recall of 79% for a window of 300 ms.
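The ray-casting step at the core of such a pipeline reduces to a geometric test, sketched below: given predicted 3-D points for a pedestrian (e.g., produced by the deep learning stage 300 ms ahead), check whether any point falls within a small radius of the TX-RX segment. All positions and the clearance radius are assumed values.
\begin{verbatim}
# LOS blockage test: does any predicted LiDAR point intersect the link?
import numpy as np

def blocks_los(points, tx, rx, radius=0.3):
    """True if any 3-D point lies within `radius` m of the TX-RX segment."""
    d = rx - tx
    L = np.linalg.norm(d)
    u = d / L
    s = np.clip((points - tx) @ u, 0.0, L)   # projection onto the segment
    closest = tx + s[:, None] * u            # nearest segment point
    return bool((np.linalg.norm(points - closest, axis=1) < radius).any())

tx = np.array([0.0, 0.0, 3.0])               # infrastructure-mounted TX
rx = np.array([10.0, 0.0, 1.5])              # user terminal

# Pedestrian torso predicted 300 ms ahead, sampled as a point cluster.
center = np.array([5.0, 0.05, 1.2])
cluster = center + 0.1 * np.random.default_rng(4).normal(size=(50, 3))
print("blockage predicted within 300 ms:", blocks_los(cluster, tx, rx))
\end{verbatim}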