Abstract:Low-altitude wireless networks (LAWNs) have the potential to revolutionize communications by supporting a range of applications, including urban parcel delivery, aerial inspections, and air taxis. However, compared with traditional wireless networks, LAWNs face unique security challenges due to low-altitude operations, frequent mobility, and reliance on unlicensed spectrum, making them more vulnerable to malicious attacks. In this paper, we investigate large artificial intelligence model (LAM)-enabled solutions for secure communications in LAWNs. Specifically, we first explore the amplified security risks in LAWNs and the key limitations of traditional AI methods in addressing them. Then, we introduce the basic concepts of LAMs and delve into their role in addressing these challenges. To demonstrate the practical benefits of LAMs for secure communications in LAWNs, we propose a novel LAM-based optimization framework that leverages large language models (LLMs) to generate enhanced state features on top of handcrafted representations and to design intrinsic rewards accordingly, thereby improving reinforcement learning performance for secure communication tasks. Through a typical case study, simulation results validate the effectiveness of the proposed framework. Finally, we outline future directions for integrating LAMs into secure LAWN applications.
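A minimal sketch of the reward-shaping idea this abstract describes, with a hypothetical `llm_enhance` function standing in for the actual LLM call; the feature names, derived ratio, and weights are illustrative assumptions, not the paper's design:

```python
import numpy as np

def handcrafted_state(channel_gain: float, jammer_power: float, distance: float) -> np.ndarray:
    """Handcrafted secure-communication features (illustrative)."""
    return np.array([channel_gain, jammer_power, distance], dtype=np.float32)

def llm_enhance(state: np.ndarray) -> np.ndarray:
    """Stand-in for LLM-generated feature enhancement; a real system would
    prompt an LLM to propose derived features such as a gain-to-jamming ratio."""
    ratio = state[0] / (state[1] + 1e-6)   # security-relevant derived feature
    return np.concatenate([state, [ratio]])

def intrinsic_reward(prev: np.ndarray, curr: np.ndarray) -> float:
    """Sketch of an LLM-designed intrinsic reward: reward improvement of the derived feature."""
    return float(curr[-1] - prev[-1])

# One environment step of the RL loop: the task reward is shaped by the intrinsic term.
prev = llm_enhance(handcrafted_state(0.8, 0.3, 120.0))
curr = llm_enhance(handcrafted_state(0.9, 0.2, 110.0))
extrinsic = 1.0                             # placeholder task reward (e.g., secrecy rate)
total_reward = extrinsic + 0.1 * intrinsic_reward(prev, curr)
print(total_reward)
```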
Abstract:Unmanned aerial vehicle (UAV) communication is of crucial importance in realizing heterogeneous practical wireless application scenarios. However, densely populated users and diverse services with high data rate demands have triggered an increasing scarcity of spectrum available for UAV utilization. To tackle this problem, it is promising to incorporate the underutilized unlicensed spectrum with the licensed spectrum to boost network capacity. However, the openness of the unlicensed spectrum makes UAVs susceptible to security threats from potential jammers. Therefore, this paper considers a spectrum-sharing UAV network that coexists with a licensed cellular network and an unlicensed Wi-Fi network and employs anti-jamming techniques. The sum rate maximization of the secondary network is studied by jointly optimizing the transmit power, subchannel allocation, and UAV trajectory. We first decompose the challenging non-convex problem into two subproblems: 1) joint power and subchannel allocation and 2) UAV trajectory design. A low-complexity iterative algorithm that alternates between these two subproblems is proposed to solve the formulated problem. Specifically, Lagrange dual decomposition is exploited to jointly optimize the transmit power and subchannel allocation iteratively. Then, an efficient iterative algorithm capitalizing on successive convex approximation is designed to obtain a suboptimal solution for the UAV trajectory. Simulation results demonstrate that our proposed algorithm can significantly improve the sum transmission rate compared with the benchmark schemes.
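A skeletal illustration of the alternating-optimization structure, under stated simplifications: a textbook water-filling step stands in for the Lagrange-dual power/subchannel solver, and a simple convexified waypoint update stands in for the SCA trajectory design; all numbers are illustrative:

```python
import numpy as np

def power_subchannel_step(gains: np.ndarray, p_total: float) -> np.ndarray:
    """Water-filling power allocation across subchannels via bisection (illustrative)."""
    inv = 1.0 / gains
    lo, hi = 0.0, p_total + inv.max()      # water level bracketing
    for _ in range(50):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv, 0.0)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return np.maximum(mu - inv, 0.0)

def trajectory_step(traj: np.ndarray, target: np.ndarray, step: float = 0.2) -> np.ndarray:
    """Convexified trajectory update: move waypoints toward a rate-improving target."""
    return traj + step * (target - traj)

gains = np.array([0.9, 0.5, 0.2, 0.7])     # toy subchannel gains
traj = np.zeros((5, 2))                    # toy 2D waypoints
target = np.tile([100.0, 50.0], (5, 1))
for _ in range(10):                        # alternate until the objective stabilizes
    power = power_subchannel_step(gains, p_total=4.0)
    traj = trajectory_step(traj, target)
rate = np.log2(1.0 + gains * power).sum()  # sum rate of the secondary network
print(round(rate, 2))
```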
Abstract:With the rapid development of Generative Artificial Intelligence (GAI) technology, Generative Diffusion Models (GDMs) have shown significant empowerment potential in the field of wireless networks due to advantages such as noise resistance, training stability, controllability, and multimodal generation. Although multiple studies have focused on GDMs for wireless networks, comprehensive reviews of their technological evolution are still lacking. Motivated by this, we systematically explore the application of GDMs in wireless networks. Firstly, starting from the mathematical principles, we analyze the technical advantages of GDMs and present six representative models. Furthermore, we propose a multi-layer wireless network architecture comprising a sensing layer, a transmission layer, an application layer, and a security plane, and introduce the core mechanisms of GDMs at each of these layers. Subsequently, we conduct a rigorous review of existing GDM-based schemes, focusing on their innovations, the role of GDMs, and their strengths and weaknesses. Ultimately, we extract key challenges and provide potential solutions, with the aim of providing directional guidance for future research in this field.
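For readers unfamiliar with the denoising mechanism GDMs rely on, here is a minimal DDPM-style reverse step; the noise predictor is a placeholder for a trained network, and the schedule is illustrative rather than tuned for any wireless task:

```python
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)         # illustrative noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_net(x_t: np.ndarray, t: int) -> np.ndarray:
    """Placeholder noise predictor; in practice a trained neural network."""
    return np.zeros_like(x_t)

def reverse_step(x_t: np.ndarray, t: int, rng: np.random.Generator) -> np.ndarray:
    """One reverse-diffusion (denoising) step x_t -> x_{t-1}."""
    eps = eps_net(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    noise = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(betas[t]) * noise

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                 # start from pure noise
for t in reversed(range(T)):               # run the full reverse chain
    x = reverse_step(x, t, rng)
print(x.round(3))
```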
Abstract:Direct-satellite-to-device (DS2D) communication is emerging as a promising solution for global mobile service extension, leveraging the deployment of satellite constellations. However, managing DS2D connectivity across multiple constellations poses an outstanding challenge, including high interference and frequent handovers caused by overlapping multi-constellation coverage and rapid satellite movement. Moreover, existing approaches primarily operate within a single constellation shell, which inherently limits the ability to exploit the vast potential of multi-constellation connectivity provision, resulting in suboptimal DS2D service performance. To address these challenges, this article proposes a Constellation as a Service (CaaS) framework, which treats the entire multi-constellation infrastructure as a shared resource pool and dynamically forms optimal sub-constellations (SCs) for each DS2D service region. The formation of each SC integrates satellites from various orbits to provide tailored connectivity based on user demands, guided by two innovative strategies: predictive satellite beamforming using generative artificial intelligence (GenAI) and pre-configured handover paths for efficient satellite access and mobility management. Simulation results demonstrate that CaaS significantly improves satellite service rates while reducing handover overhead, making it an efficient and sustainable solution for managing DS2D connectivity in multi-constellation environments.
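A hedged sketch of the SC-formation idea: greedily pick satellites from a multi-constellation pool for a service region, trading predicted service rate against handover cost via dwell time. The scoring rule and weights are invented for illustration; the paper's GenAI beamforming and pre-configured handover paths are not modeled here:

```python
from dataclasses import dataclass

@dataclass
class Satellite:
    sat_id: str
    rate: float    # predicted service rate to the region (illustrative units)
    dwell: float   # visibility time over the region in minutes

def form_sc(pool: list[Satellite], size: int, w_handover: float = 0.5) -> list[Satellite]:
    """Greedy SC formation: prefer high rate and long dwell (fewer handovers)."""
    score = lambda s: s.rate + w_handover * s.dwell
    return sorted(pool, key=score, reverse=True)[:size]

pool = [Satellite("LEO-A1", 120.0, 6.0), Satellite("LEO-B3", 90.0, 11.0),
        Satellite("MEO-C2", 60.0, 40.0), Satellite("LEO-A7", 110.0, 4.0)]
print([s.sat_id for s in form_sc(pool, size=2)])
```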
Abstract:Recent studies have focused on leveraging large-scale artificial intelligence (LAI) models to improve semantic representation and compression capabilities. However, the substantial computational demands of LAI models pose significant challenges for real-time communication scenarios. To address this challenge, this paper proposes utilizing knowledge distillation (KD) techniques to extract and condense knowledge from LAI models, effectively reducing model complexity and computation latency. Nevertheless, the inherent complexity of LAI models leads to prolonged inference times during distillation, while their lack of channel awareness compromises the distillation performance. These limitations make standard KD methods unsuitable for task-oriented semantic communication scenarios. To address these issues, we propose a fast distillation method featuring a pre-stored compression mechanism that eliminates the need for repetitive inference, significantly improving efficiency. Furthermore, a channel adaptive module is incorporated to dynamically adjust the transmitted semantic information based on varying channel conditions, enhancing communication reliability and adaptability. In addition, an information bottleneck-based loss function is derived to guide the fast distillation process. Simulation results verify that the proposed scheme outperforms baselines in terms of task accuracy, model size, computation latency, and training data requirements.
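A minimal sketch of the two mechanisms named above: teacher features are pre-stored once (avoiding repeated LAI inference during distillation), and a channel-adaptive rule masks semantic dimensions according to SNR. The shapes, modules, and the SNR-to-dimensions rule are all illustrative assumptions, and a simple MSE loss stands in for the information-bottleneck objective:

```python
import torch
import torch.nn as nn

teacher_cache = torch.randn(256, 64)        # pre-stored teacher semantic features (computed once)
student = nn.Sequential(nn.Linear(32, 64))  # lightweight student encoder
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def channel_adaptive_mask(feat: torch.Tensor, snr_db: float) -> torch.Tensor:
    """Keep more semantic dimensions when the channel is good (illustrative rule)."""
    keep = max(8, min(feat.shape[-1], int(feat.shape[-1] * (snr_db + 10) / 30)))
    mask = torch.zeros_like(feat)
    mask[..., :keep] = 1.0
    return feat * mask

inputs = torch.randn(256, 32)
for _ in range(5):                          # distill against cached features: no teacher inference
    out = channel_adaptive_mask(student(inputs), snr_db=10.0)
    loss = nn.functional.mse_loss(out, channel_adaptive_mask(teacher_cache, snr_db=10.0))
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```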
Abstract:Six-dimensional movable antenna (6DMA) is an innovative and transformative technology to improve wireless network capacity by adjusting the 3D positions and 3D rotations of antennas/surfaces (sub-arrays) based on the channel spatial distribution. For optimization of the antenna positions and rotations, the acquisition of statistical channel state information (CSI) is essential for 6DMA systems. In this paper, we unveil for the first time a new \textbf{\textit{directional sparsity}} property of the 6DMA channels between the base station (BS) and the distributed users, where each user has significant channel gains only with a (small) subset of 6DMA position-rotation pairs, which can receive direct/reflected signals from the user. By exploiting this property, a covariance-based algorithm is proposed for estimating the statistical CSI in terms of the average channel power at a small number of 6DMA positions and rotations. Based on such limited channel power estimation, the average channel powers for all possible 6DMA positions and rotations in the BS movement region are reconstructed by further estimating the multi-path average power and direction-of-arrival (DOA) vectors of all users. Simulation results show that the proposed directional sparsity-based algorithm can achieve higher channel power estimation accuracy than existing benchmark schemes, while requiring a lower pilot overhead.
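A simplified numerical sketch of the covariance-based idea: estimate average channel power at a few sampled position-rotation pairs from received pilots, then exploit directional sparsity by keeping only pairs whose power exceeds a threshold. The dimensions, the assumption of known unit noise power, and the threshold are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, n_pilots = 16, 200
# Ground truth: the user excites only a small subset of position-rotation pairs.
true_power = np.zeros(n_pairs)
true_power[[2, 7, 11]] = [1.0, 0.6, 0.3]

# Received pilot samples at each sampled pair (signal power plus unit-variance noise).
y = rng.normal(size=(n_pairs, n_pilots)) * np.sqrt(true_power + 1.0)[:, None]

est_power = np.maximum(y.var(axis=1) - 1.0, 0.0)  # sample variance minus known noise power
support = np.where(est_power > 0.2)[0]            # directional-sparsity support
print(support, est_power[support].round(2))
```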
Abstract:Accurate channel prediction is essential in massive multiple-input multiple-output (m-MIMO) systems to improve precoding effectiveness and reduce the overhead of channel state information (CSI) feedback. However, existing methods often suffer from accumulated prediction errors and poor generalization to dynamic wireless environments. Large language models (LLMs) have demonstrated remarkable modeling and generalization capabilities in tasks such as time series prediction, making them a promising solution. Nevertheless, a significant modality gap exists between the linguistic knowledge embedded in pretrained LLMs and the intrinsic characteristics of CSI, posing substantial challenges for their direct application to channel prediction. Moreover, the large parameter size of LLMs hinders their practical deployment in real-world communication systems with stringent latency constraints. To address these challenges, we propose a novel channel prediction framework based on semantically aligned large models, referred to as CSI-ALM, which bridges the modality gap between natural language and channel information. Specifically, we design a cross-modal fusion module that aligns CSI representations with the semantic space of the pretrained LLM. Additionally, we maximize the cosine similarity between word embeddings and CSI embeddings to construct semantic cues. To reduce complexity and enable practical implementation, we further introduce a lightweight version of the proposed approach, called CSI-ALM-Light. This variant is derived via a knowledge distillation strategy based on attention matrices. Extensive experimental results demonstrate that CSI-ALM achieves a 1 dB gain over state-of-the-art deep learning methods. Moreover, under limited training data conditions, CSI-ALM-Light, with only 0.34M parameters, attains performance comparable to CSI-ALM and significantly outperforms conventional deep learning approaches.
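A minimal sketch of the semantic-alignment step: maximize cosine similarity between CSI embeddings and a frozen word-embedding table to build semantic cues. The embedding table and encoder here are random placeholders, not a real pretrained LLM:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

word_emb = torch.randn(1000, 128)        # frozen word embeddings (placeholder for the LLM's table)
csi_encoder = nn.Linear(64, 128)         # maps CSI features into the embedding space
opt = torch.optim.Adam(csi_encoder.parameters(), lr=1e-3)

csi = torch.randn(32, 64)                # a batch of CSI feature vectors
for _ in range(100):
    z = csi_encoder(csi)
    # Pairwise cosine similarity between each CSI embedding and every word embedding.
    sim = F.cosine_similarity(z.unsqueeze(1), word_emb.unsqueeze(0), dim=-1)
    cues, _ = sim.max(dim=1)             # best-matching word per CSI sample (semantic cue)
    loss = -cues.mean()                  # maximize cosine similarity
    opt.zero_grad(); loss.backward(); opt.step()
print(cues.mean().item())
```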
Abstract:Massive random access is an important technology for achieving ultra-massive connectivity in next-generation wireless communication systems. It aims to address key challenges during the initial access phase, including active user detection (AUD), channel estimation (CE), and data detection (DD). This paper examines massive access in massive multiple-input multiple-output (MIMO) systems, where deep learning is used to tackle the challenging AUD, CE, and DD functions. First, we introduce a Transformer-AUD scheme tailored for variable pilot-length access. This approach integrates pilot length information and a spatial correlation module into a Transformer-based detector, enabling a single model to generalize across various pilot lengths and antenna numbers. Next, we propose a generative diffusion model (GDM)-driven iterative CE and DD framework. The GDM employs a score function to capture the posterior distributions of massive MIMO channels and data symbols. Part of the score function is learned from the channel dataset via neural networks, while the remaining score component is derived in a closed form by applying the symbol prior constellation distribution and known transmission model. Utilizing these posterior scores, we design an asynchronous alternating CE and DD framework that employs a predictor-corrector sampling technique to iteratively generate channel estimation and data detection results during the reverse diffusion process. Simulation results demonstrate that our proposed approaches significantly outperform baseline methods with respect to AUD, CE, and DD.
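A toy illustration of the predictor-corrector sampling pattern the framework employs: here the closed-form score of a scalar Gaussian posterior stands in for the learned channel score and the constellation-derived data score, and the step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, var = 1.5, 0.25                     # toy posterior N(mu, var) over a channel tap
score = lambda x: (mu - x) / var        # closed-form score: d/dx log p(x)

x = rng.standard_normal(64)             # initialize samples from noise
for _ in range(200):
    x = x + 0.01 * score(x)                                    # predictor: deterministic drift
    x = x + 0.005 * score(x) + 0.1 * rng.standard_normal(64)   # corrector: Langevin step
print(round(float(x.mean()), 2))        # samples concentrate near the posterior mean
```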
Abstract:To support emerging language-based applications using dispersed and heterogeneous computing resources, the hybrid language model (HLM) offers a promising architecture, where an on-device small language model (SLM) generates draft tokens that are validated and corrected by a remote large language model (LLM). However, the original HLM suffers from substantial communication overhead, as the LLM requires the SLM to upload the full vocabulary distribution for each token. Moreover, both communication and computation resources are wasted when the LLM validates tokens that are highly likely to be accepted. To overcome these limitations, we propose communication-efficient and uncertainty-aware HLM (CU-HLM). In CU-HLM, the SLM transmits truncated vocabulary distributions only when its output uncertainty is high. We validate the feasibility of this opportunistic transmission by discovering a strong correlation between the SLM's uncertainty and the LLM's rejection probability. Furthermore, we theoretically derive optimal uncertainty thresholds and optimal vocabulary truncation strategies. Simulation results show that, compared to standard HLM, CU-HLM achieves up to 206$\times$ higher token throughput by skipping 74.8% transmissions with 97.4% vocabulary compression, while maintaining 97.4% accuracy.
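A minimal sketch of the CU-HLM uplink rule: the SLM uploads a truncated vocabulary distribution only when its token-level uncertainty (entropy here) exceeds a threshold. The threshold and top-k size below are illustrative, not the paper's derived optima:

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def maybe_transmit(p: np.ndarray, unc_thresh: float, top_k: int):
    """Skip the upload when the SLM is confident; otherwise send top-k probabilities."""
    if entropy(p) < unc_thresh:
        return None                       # accept the SLM draft token locally
    idx = np.argsort(p)[::-1][:top_k]     # truncated vocabulary indices
    return idx, p[idx] / p[idx].sum()     # renormalized truncated distribution

p_confident = np.array([0.90, 0.05, 0.03, 0.02])
p_uncertain = np.full(4, 0.25)
print(maybe_transmit(p_confident, unc_thresh=0.8, top_k=2))   # None: transmission skipped
print(maybe_transmit(p_uncertain, unc_thresh=0.8, top_k=2))   # uploads truncated distribution
```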
Abstract:Recent advancements in large language models (LLMs) have led to significant interest in deploying LLM-empowered algorithms for wireless communication networks. Meanwhile, open radio access network (O-RAN) techniques offer unprecedented flexibility, with the non-real-time (non-RT) RAN intelligent controller (RIC) and the near-real-time (near-RT) RIC enabling intelligent resource management across different time scales. In this paper, we propose the LLM-empowered hierarchical RIC (LLM-hRIC) framework to improve the collaboration between RICs. This framework integrates LLMs with reinforcement learning (RL) for efficient network resource management. In this framework, LLM-empowered non-RT RICs provide strategic guidance and high-level policies based on environmental context, while RL-empowered near-RT RICs perform low-latency tasks based on this strategic guidance and local near-RT observations. We evaluate the LLM-hRIC framework in an integrated access and backhaul (IAB) network setting. Simulation results demonstrate that the proposed framework achieves superior performance. Finally, we discuss the key future challenges in applying LLMs to O-RAN.
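A hedged sketch of the two-timescale interaction pattern: a mocked non-RT guidance function stands in for the LLM, refreshing high-level policy hints slowly, while a tiny Q-learning agent acts on them at the near-RT timescale. The guidance rule, reward, and action space are placeholders, not the paper's IAB formulation:

```python
import numpy as np

def non_rt_llm_guidance(context: dict) -> dict:
    """Stand-in for an LLM call; returns a high-level policy hint from network context."""
    return {"power_cap": 0.5 if context["load"] > 0.7 else 1.0}

rng = np.random.default_rng(0)
q = np.zeros(4)                              # near-RT RIC: tiny Q-table over 4 actions
guidance = {"power_cap": 1.0}
for t in range(1000):
    if t % 100 == 0:                         # non-RT timescale: refresh strategic guidance
        guidance = non_rt_llm_guidance({"load": rng.random()})
    a = int(rng.integers(4)) if rng.random() < 0.1 else int(q.argmax())
    reward = guidance["power_cap"] * (1.0 - abs(a - 2) / 3)  # toy guidance-shaped reward
    q[a] += 0.1 * (reward - q[a])            # near-RT timescale: RL update
print(q.round(2))
```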