As the dawn of sixth-generation (6G) networking approaches, it promises unprecedented advancements in communication and automation. Among the leading innovations of 6G is the concept of Zero Touch Networks (ZTNs), aiming to achieve fully automated, self-optimizing networks with minimal human intervention. Despite the advantages ZTNs offer in terms of efficiency and scalability, challenges surrounding transparency, adaptability, and human trust remain prevalent. Concurrently, the advent of Large Language Models (LLMs) presents an opportunity to elevate the ZTN framework by bridging the gap between automated processes and human-centric interfaces. This paper explores the integration of LLMs into ZTNs, highlighting their potential to enhance network transparency and improve user interactions. Through a comprehensive case study of a deep reinforcement learning (DRL)-based anti-jamming technique, we demonstrate how LLMs can distill intricate network operations into intuitive, human-readable reports. Additionally, we address the technical and ethical intricacies of melding LLMs with ZTNs, with an emphasis on data privacy, transparency, and bias reduction. Looking ahead, we identify emerging research avenues at the nexus of LLMs and ZTNs, advocating for sustained innovation and interdisciplinary synergy in the domain of automated networks.
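To make the reporting pipeline concrete, the following is a minimal sketch of how DRL anti-jamming telemetry could be assembled into an LLM prompt for a human-readable report. The telemetry fields and the query_llm helper are hypothetical placeholders, not part of any specific ZTN implementation described above.

```python
# Minimal sketch: turning DRL anti-jamming telemetry into an LLM prompt for a
# human-readable network report. Field names and the `query_llm` helper are
# illustrative assumptions, not a specific ZTN or LLM API.

def build_report_prompt(episode_stats: dict) -> str:
    """Assemble a natural-language prompt from raw DRL episode statistics."""
    lines = [
        "You are a network operations assistant.",
        "Summarise the following anti-jamming agent telemetry for a human operator,",
        "highlighting anomalies and recommended actions in plain language.",
        "",
    ]
    for key, value in episode_stats.items():
        lines.append(f"- {key}: {value}")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM backend (hosted or local)."""
    raise NotImplementedError("Plug in the LLM client of your choice here.")

if __name__ == "__main__":
    stats = {
        "episodes": 500,
        "average_reward": 0.87,
        "channel_switch_rate": "12 switches/min",
        "estimated_jammer_type": "sweeping",
        "packet_delivery_ratio": "96.4%",
    }
    print(build_report_prompt(stats))
```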
Traditional anti-jamming techniques, such as spread spectrum, adaptive power/rate control, and cognitive radio, have demonstrated effectiveness in mitigating jamming attacks. However, their robustness against the growing complexity of Internet of Things (IoT) networks and diverse jamming attacks is still limited. To address these challenges, machine learning (ML)-based techniques have emerged as promising solutions. By offering adaptive and intelligent anti-jamming capabilities, ML-based approaches can effectively adapt to dynamic attack scenarios and overcome the limitations of traditional methods. In this paper, we propose a deep reinforcement learning (DRL)-based approach that utilizes state input from realistic wireless network interface cards. We train five different variants of deep Q-network (DQN) agents to mitigate the effects of jamming, with the aim of identifying the most sample-efficient, lightweight, robust, and least complex agent tailored for power-constrained devices. The simulation results demonstrate the effectiveness of the proposed DRL-based anti-jamming approach against proactive jammers, regardless of their jamming strategy, which eliminates the need for a pattern recognition or jamming strategy detection step. Our findings present a promising solution for securing IoT networks against jamming attacks and highlight substantial opportunities for continued investigation and advancement within this field.
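As a rough illustration of the DQN-based channel-selection idea, the sketch below trains a small Q-network to avoid a toy sweeping jammer. The environment, the state definition (per-channel sensed power), and all hyper-parameters are assumptions for illustration only, not the exact setup evaluated in the paper.

```python
# Minimal DQN sketch: an agent learns to avoid a sweeping jammer by selecting
# one of N_CHANNELS channels per time step. No replay buffer or target network
# is used, purely for brevity.
import random
import numpy as np
import torch
import torch.nn as nn

N_CHANNELS = 8

class SweepingJammerEnv:
    """Toy environment: a jammer sweeps channels; reward 1 if we avoid it."""
    def __init__(self):
        self.jammed = 0
    def reset(self):
        self.jammed = 0
        state = np.zeros(N_CHANNELS, dtype=np.float32)
        state[self.jammed] = 1.0
        return state
    def step(self, action):
        self.jammed = (self.jammed + 1) % N_CHANNELS   # deterministic sweep
        state = np.zeros(N_CHANNELS, dtype=np.float32)
        state[self.jammed] = 1.0                        # sensed power spike
        reward = 0.0 if action == self.jammed else 1.0
        return state, reward

q_net = nn.Sequential(nn.Linear(N_CHANNELS, 64), nn.ReLU(), nn.Linear(64, N_CHANNELS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.9, 0.1
env = SweepingJammerEnv()
state = env.reset()

for step in range(2000):
    # epsilon-greedy channel selection
    if random.random() < epsilon:
        action = random.randrange(N_CHANNELS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.tensor(state)).argmax())
    next_state, reward = env.step(action)
    # one-step temporal-difference target
    with torch.no_grad():
        target = reward + gamma * q_net(torch.tensor(next_state)).max()
    pred = q_net(torch.tensor(state))[action]
    loss = (pred - target) ** 2
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    state = next_state
```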
This article introduces a new method to improve the dependability of millimeter-wave (mmWave) and terahertz (THz) network services in dynamic outdoor environments. In these settings, line-of-sight (LoS) connections are easily interrupted by moving obstacles such as humans and vehicles. The proposed approach, coined Radar-aided Dynamic blockage Recognition (RaDaR), leverages radar measurements and federated learning (FL) to train a dual-output neural network (NN) model capable of simultaneously predicting blockage status and time. This enables determining the optimal point for proactive handover (PHO) or beam switching, thereby reducing the latency introduced by 5G new radio procedures and ensuring high quality of experience (QoE). The framework employs radar sensors to monitor and track the movement of objects, generating range-angle and range-velocity maps that are useful for scene analysis and predictions. Moreover, FL provides additional benefits such as privacy protection, scalability, and knowledge sharing. The framework is assessed using an extensive real-world dataset comprising mmWave channel information and radar data. The evaluation results show that RaDaR substantially enhances network reliability, achieving an average success rate of 94% for PHO compared to existing reactive HO procedures that lack proactive blockage prediction. Additionally, RaDaR maintains a superior QoE by ensuring sustained high throughput levels and minimising PHO latency.
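The dual-output architecture can be sketched as a shared trunk with two heads, one classifying blockage status and one regressing time-to-blockage. The input size, layer widths, and loss weighting below are assumptions for illustration; they are not the configuration reported in the paper.

```python
# Minimal sketch of a dual-output network in the spirit of RaDaR: flattened
# radar features (e.g. from range-angle / range-velocity maps) feed a shared
# trunk with a blockage-status head and a time-to-blockage head.
import torch
import torch.nn as nn

class DualOutputBlockageNet(nn.Module):
    def __init__(self, in_features=512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.status_head = nn.Linear(64, 1)   # blockage probability (logit)
        self.time_head = nn.Linear(64, 1)     # predicted time to blockage

    def forward(self, x):
        h = self.trunk(x)
        return self.status_head(h), self.time_head(h)

model = DualOutputBlockageNet()
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

# Joint loss over a dummy batch; in an FL deployment each radar node would run
# this step locally and only model updates would be aggregated by the server.
x = torch.randn(32, 512)
y_status, y_time = torch.randint(0, 2, (32, 1)).float(), torch.rand(32, 1)
logit, t_hat = model(x)
loss = bce(logit, y_status) + 0.5 * mse(t_hat, y_time)
loss.backward()
```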
Research on the sixth generation of communication networks needs to tackle new challenges in order to meet the requirements of emerging applications in terms of high data rates, low latency, high reliability, and massive connectivity. To this end, the entire communication chain needs to be optimized, including the channel and the surrounding environment, as it is no longer sufficient to control the transmitter and/or the receiver only. Investigating large intelligent surfaces, ultra-massive multiple-input multiple-output, and smart constructive environments will contribute to this direction. In addition, to allow the exchange of high-dimensional sensing data between connected intelligent devices, semantic and goal-oriented communications need to be considered for more efficient and context-aware information encoding. In particular, for multi-agent systems, where agents collaborate to achieve a complex task, emergent communications, instead of hard-coded communications, can be learned for more efficient task execution and use of communication resources. Moreover, the interaction between information theory and electromagnetism should be explored to better understand the physical limitations of different technologies, e.g., holographic communications. Another new communication paradigm is to consider the end-to-end approach instead of block-by-block optimization, which requires exploiting machine learning theory, non-linear signal processing theory, and non-coherent communications theory. Within this context, we identify ten scientific challenges for rebuilding the theoretical foundations of communications, and we overview each of the challenges while providing research opportunities and open questions for the research community.
We propose an enhanced spatial modulation (SM)-based scheme for indoor visible light communication systems. This scheme enhances the achievable throughput of conventional SM schemes by transmitting a higher-order complex modulation symbol, which is decomposed into three different parts. These parts carry the amplitude, phase, and quadrant components of the complex symbol, which are then represented by unipolar pulse amplitude modulation (PAM) symbols. Superposition coding is exploited to allocate a fraction of the total power to each part before they are all multiplexed and transmitted simultaneously, exploiting the entire available bandwidth. At the receiver, a two-step decoding process is proposed to decode the active light emitting diode index before the complex symbol is retrieved. It is shown that at higher spectral efficiency values, the proposed modulation scheme outperforms conventional SM schemes with PAM symbols in terms of average symbol error rate (ASER), thereby enhancing the system throughput. Furthermore, since the performance of the proposed modulation scheme is sensitive to the power allocation factors, we formulate an ASER optimization problem and propose a sub-optimal solution using successive convex programming (SCP). Notably, the proposed algorithm converges after only a few iterations, whilst the performance with the optimized power allocation coefficients outperforms both random and fixed power allocation.
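The superposition step described above can be written compactly as follows, with the power-allocation notation assumed here rather than taken from the paper: the transmitted signal takes the form

$x = \sqrt{\alpha_A P}\, s_A + \sqrt{\alpha_P P}\, s_P + \sqrt{\alpha_Q P}\, s_Q$, with $\alpha_A + \alpha_P + \alpha_Q = 1$,

where $s_A$, $s_P$, and $s_Q$ are the unipolar PAM symbols carrying the amplitude, phase, and quadrant parts of the complex symbol, $P$ is the total transmit power, and the allocation factors $\alpha_A$, $\alpha_P$, $\alpha_Q$ are the quantities optimized via SCP.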
In this letter, we investigate the performance of reconfigurable intelligent surface (RIS)-assisted communications, under the assumption of generalized Gaussian noise (GGN), over Rayleigh fading channels. Specifically, we consider an RIS, equipped with $N$ reflecting elements, and derive a novel closed-form expression for the symbol error rate (SER) of arbitrary modulation schemes. The derived expression is useful in that it captures the SER performance in the presence of special additive noise distributions such as Gamma, Laplacian, and Gaussian noise. These special cases are also considered and their associated asymptotic SER expressions are derived, and then employed to quantify the achievable diversity order of the system. The theoretical framework is corroborated by numerical results, which reveal that the shaping parameter of the GGN ($\alpha$) has a negligible effect on the diversity order of RIS-assisted systems, particularly for large $\alpha$ values. Accordingly, the maximum achievable diversity order is determined by $N$.
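For reference, the zero-mean generalized Gaussian noise model underlying this analysis has the standard probability density function (a textbook form, not the paper's derived SER expression)

$f_W(w) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\!\left(-\left(\tfrac{|w|}{\beta}\right)^{\alpha}\right)$,

where $\alpha$ is the shaping parameter, $\beta$ is a scale parameter, and $\Gamma(\cdot)$ is the Gamma function; setting $\alpha = 1$ yields Laplacian noise and $\alpha = 2$ yields Gaussian noise.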
The unprecedented surge of data volume in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by collecting datasets and training models centrally. However, this conventional training technique poses two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threats to data privacy, as untrusted parties are allowed to utilise this information. Recently, in light of these limitations, a new emerging technique, coined federated learning (FL), has arisen to bring ML to the edge of wireless networks. FL can extract the benefits of data silos by training a global model in a distributed manner, orchestrated by the FL server. FL exploits both the decentralised datasets and the computing resources of participating clients to develop a generalised ML model without compromising data privacy. In this article, we introduce a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored with emerging prospects in beyond fifth generation (B5G) and sixth generation (6G) communication systems. The purpose of this survey is to provide an overview of the state of the art of FL applications in key wireless technologies that will serve as a foundation for establishing a firm understanding of the topic. Lastly, we offer a road map for future research directions.
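To make the server-orchestrated training loop concrete, the following is a minimal federated averaging (FedAvg-style) sketch. The linear model, toy data, and hyper-parameters are placeholders chosen only to keep the example self-contained.

```python
# Minimal FedAvg sketch: each client runs local gradient steps on its own data
# and the server aggregates the updated weights, weighted by local sample
# counts, so raw data never leaves the clients.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient-descent steps on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_averaging(w_global, clients, rounds=20):
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                      # each tuple is one client's local dataset
            updates.append(local_update(w_global.copy(), X, y))
            sizes.append(len(y))
        # weighted average of client models performed at the FL server
        w_global = np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))
    return w_global

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)
print("learned weights:", w)
```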
Visible light communication (VLC) has been recognized as a promising technology for handling the continuously increasing quality of service and connectivity requirements in modern wireless communications, particularly in indoor scenarios. In this context, the present work considers the integration of two distinct modulation schemes, namely spatial modulation (SM) and space-time block codes (STBCs), aiming at improving the overall VLC system reliability. Based on this, and in order to further enhance the achievable transmission data rate, we integrate quasi-orthogonal STBC (QOSTBC) with SM, since relaxing the orthogonality condition of OSTBC ultimately provides a higher coding rate. Then, we generalize the developed results to any number of active light-emitting diodes (LEDs) and any M-ary pulse amplitude modulation size. Furthermore, we derive a tight and tractable upper bound for the corresponding bit error rate (BER) by considering a simple two-step decoding procedure to detect the indices of the transmitting LEDs and then decode the signal domain symbols. Notably, the obtained results demonstrate that QOSTBC with SM improves the achievable BER compared to SM with repetition coding (RC-SM). Finally, we compare STBC-SM with both multiple active SM (MASM) and RC-SM in terms of the achievable BER and overall data rate, which further justifies the usefulness of the proposed scheme.
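For illustration of the rate benefit of relaxing orthogonality, the classical rate-one quasi-orthogonal design for four transmit elements (Jafarkhani's construction, shown here in its original complex-valued form; a VLC deployment with unipolar PAM requires a real, non-negative adaptation) is

$\mathbf{C} = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \\ -x_2^{*} & x_1^{*} & -x_4^{*} & x_3^{*} \\ -x_3^{*} & -x_4^{*} & x_1^{*} & x_2^{*} \\ x_4 & -x_3 & -x_2 & x_1 \end{bmatrix}$,

which transmits four symbols over four time slots (rate 1), whereas a complex orthogonal design for four antennas is limited to rate 3/4; this is the higher coding rate referred to above.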
Visible light communication (VLC) technology was introduced as a key enabler for the next generation of wireless networks, mainly thanks to its simple and low-cost implementation. However, several challenges prohibit the realization of the full potential of VLC, namely, limited modulation bandwidth, ambient light interference, optical diffuse reflection effects, device non-linearity, and random receiver orientation. On the contrary, centralized machine learning (ML) techniques have demonstrated significant potential in handling different challenges relating to wireless communication systems. Specifically, it was shown that ML algorithms exhibit superior capabilities in handling complicated network tasks, such as channel equalization, estimation and modeling, resource allocation, and opportunistic spectrum access control, to name a few. Nevertheless, concerns pertaining to privacy and communication overhead when sharing raw data of the involved clients with a server constitute major bottlenecks in the implementation of centralized ML techniques. This has motivated the emergence of a new distributed ML paradigm, namely federated learning (FL), which can reduce the cost associated with transferring raw data, and preserve privacy by training ML models locally and collaboratively at the clients' side. Hence, it becomes evident that integrating FL into VLC networks can provide a ubiquitous and reliable implementation of VLC systems. With this motivation, this is the first in-depth review in the literature on the application of FL in VLC networks. To that end, besides the different architectures and related characteristics of FL, we provide a thorough overview of the main design aspects of FL-based VLC systems. Finally, we also highlight some potential future research directions of FL that are envisioned to substantially enhance the performance and robustness of VLC systems.
Non-orthogonal multiple access (NOMA) is a technology enabler for fifth generation and beyond networks, which has shown great flexibility such that it can be readily integrated with other wireless technologies. In this paper, we investigate the interplay between NOMA and generalized space shift keying (GSSK) in a hybrid NOMA-GSSK (N-GSSK) network. Specifically, we provide a comprehensive analytical framework and propose a novel suboptimal energy-based maximum likelihood (ML) detector for the N-GSSK scheme. The proposed ML decoder exploits the energy of the received signals in order to estimate the active antenna indices. Its performance is investigated in terms of pairwise error probability, bit error rate union bound, and achievable rate. Finally, we establish the validity of our analysis through Monte-Carlo simulations and demonstrate that N-GSSK outperforms conventional NOMA and GSSK, particularly in terms of spectral efficiency.
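The antenna-index detection at the heart of GSSK can be sketched as an exhaustive search over candidate index sets. The example below uses a plain single-user Euclidean metric for clarity; the suboptimal detector proposed in the paper instead builds its decision statistic on received signal energy and additionally handles the NOMA superposition, which is omitted here.

```python
# Minimal sketch: detection of the active-antenna index set in GSSK by
# exhaustive search over antenna combinations (single-user, Euclidean metric).
from itertools import combinations
import numpy as np

def gssk_detect(y, H, n_active, es=1.0):
    """Return the antenna-index set whose combined channel best explains y."""
    n_rx, n_tx = H.shape
    best_set, best_metric = None, np.inf
    for idx in combinations(range(n_tx), n_active):
        candidate = np.sqrt(es / n_active) * H[:, list(idx)].sum(axis=1)
        metric = np.linalg.norm(y - candidate) ** 2
        if metric < best_metric:
            best_set, best_metric = idx, metric
    return best_set

rng = np.random.default_rng(1)
n_rx, n_tx, n_active = 4, 6, 2
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
true_set = (1, 4)
y = np.sqrt(1.0 / n_active) * H[:, list(true_set)].sum(axis=1) \
    + 0.05 * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
print("detected indices:", gssk_detect(y, H, n_active))
```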