Reconfigurable intelligent surface (RIS) has emerged as a promising technology for realizing programmable wireless environments by steering incident signals in fully customizable ways. However, a major challenge in RIS-aided communication systems is the simultaneous design of the precoding matrix at the base station (BS) and the phase shifting matrix of the RIS elements. This difficulty stems from the highly non-convex optimization space of the variables at both the BS and the RIS, as well as the diversity of communication environments. In general, traditional optimization methods for this problem suffer from high complexity, while existing deep learning based methods lack robustness across scenarios. To address these issues, we introduce a gradient-based manifold meta learning method (GMML), which works without pre-training and exhibits strong robustness for RIS-aided communications. Specifically, the proposed method fuses meta learning and manifold learning to improve the overall spectral efficiency and to reduce the overhead of high-dimensional signal processing. Unlike traditional deep learning based methods, which directly take channel state information as input, GMML feeds the gradients of the precoding matrix and the phase shifting matrix into the neural networks. Accordingly, we design a differential regulator to constrain the phase shifting matrix of the RIS. Numerical results show that the proposed GMML can improve the spectral efficiency by up to 7.31\% and accelerate convergence by a factor of 23 compared to traditional approaches. Moreover, GMML demonstrates remarkable robustness and adaptability in dynamic settings.
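The gradient-as-input update loop described in this abstract can be sketched in a few lines. All dimensions, the finite-difference gradients, and the plain scaled step standing in for GMML's learned update network are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): Nt BS antennas, Nr RIS
# elements, K single-antenna users, noise power sigma2.
Nt, Nr, K, sigma2 = 4, 8, 2, 0.1
H_br = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
H_ru = (rng.standard_normal((K, Nr)) + 1j * rng.standard_normal((K, Nr))) / np.sqrt(2)

def spectral_efficiency(W, theta):
    """Sum rate of the cascaded channel H_ru diag(e^{j theta}) H_br with precoder W."""
    H_eff = H_ru @ np.diag(np.exp(1j * theta)) @ H_br
    S = H_eff @ W
    return np.log2(np.linalg.det(np.eye(K) + S @ S.conj().T / sigma2)).real

def normalize(W):
    return W / np.linalg.norm(W)  # unit transmit power budget

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient over a real-valued array."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

W = normalize(rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K)))
theta = rng.uniform(0, 2 * np.pi, Nr)
lr = 0.01
ses = [spectral_efficiency(W, theta)]
for _ in range(60):
    # gradients of both variables are the inputs that drive the update
    g_th = num_grad(lambda t: spectral_efficiency(W, t), theta)
    g_re = num_grad(lambda X: spectral_efficiency(X + 1j * W.imag, theta), W.real)
    g_im = num_grad(lambda X: spectral_efficiency(W.real + 1j * X, theta), W.imag)
    theta = theta + lr * g_th                    # phases stay on the unit-modulus manifold
    W = normalize(W + lr * (g_re + 1j * g_im))   # precoder projected back to the power budget
    ses.append(spectral_efficiency(W, theta))

assert ses[-1] > ses[0]  # the loop improves the spectral efficiency
```

Replacing the scaled step with a small trained network that maps gradients to updates would recover the meta-learning flavor of the scheme; the unit-modulus parameterization of `theta` plays the role of the manifold constraint.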
Reconfigurable intelligent surface (RIS) has great potential to improve the performance of integrated sensing and communication (ISAC) systems, especially in scenarios where line-of-sight paths between the base station and users are blocked. However, the spectral efficiency (SE) of RIS-aided ISAC uplink transmissions may be drastically reduced by the heavy pilot overhead required for realizing sensing capabilities. In this paper, we tackle this bottleneck by proposing a superimposed symbol scheme, which superimposes sensing pilots onto data symbols over the same time-frequency resources. Specifically, we develop a structure-aware sparse Bayesian learning framework, where decoded data symbols serve as side information to enhance sensing performance and increase the SE. To meet the low-latency requirements of emerging ISAC applications, we further propose a low-complexity simultaneous communication and localization algorithm for multiple users. This algorithm employs unitary approximate message passing in the Bayesian learning framework for the initial angle estimation, followed by iterative refinements through reduced-dimension matrix calculations. Moreover, sparse code multiple access technology is incorporated into this iterative framework for accurate data detection, which also facilitates localization. Numerical results show that the proposed superimposed symbol-based scheme, empowered by the developed algorithm, can achieve centimeter-level localization while attaining up to $96\%$ of the SE of conventional communications without sensing capabilities. Moreover, compared to other typical ISAC schemes, the proposed superimposed symbol scheme can provide an effective throughput improvement of over $133\%$.
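The core superimposed-symbol idea, decoded data acting as side information for estimation, can be illustrated with a minimal decision-feedback sketch. The flat channel, power split, and least-squares estimators are assumptions for illustration; the paper's actual framework is Bayesian:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy single-user, flat-channel sketch (illustrative assumptions, not
# the paper's framework): a known pilot rides on top of QPSK data over
# the same resources; a first estimate treats the data as interference,
# and the detected data then refines the channel estimate.
L, rho, snr_db = 256, 0.3, 15.0            # symbols, pilot power fraction, SNR
data = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(4, size=L)))  # QPSK
pilot = np.exp(2j * np.pi * rng.random(L))  # known unit-modulus pilot
h = 1.0 + 0.5j                              # assumed flat channel gain
sigma = 10 ** (-snr_db / 20)
noise = sigma / np.sqrt(2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
y = h * (np.sqrt(rho) * pilot + np.sqrt(1 - rho) * data) + noise

# Stage 1: pilot-only least-squares estimate; data acts as interference.
h1 = pilot.conj() @ y / (np.sqrt(rho) * L)

# Stage 2: detect the data, then re-estimate using the decoded symbols
# as extra pilots (the refined estimate typically improves on stage 1).
z = (y - np.sqrt(rho) * h1 * pilot) / (np.sqrt(1 - rho) * h1)
d_hat = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)  # QPSK slicer
s = np.sqrt(rho) * pilot + np.sqrt(1 - rho) * d_hat
h2 = s.conj() @ y / np.sum(np.abs(s) ** 2)

ser = float(np.mean(np.abs(d_hat - data) > 1e-6))
assert ser < 0.05 and abs(h2 - h) < 0.1
```

The SE gain of the scheme comes from the fact that no resources are reserved exclusively for pilots: the same `L` symbols carry data while still enabling channel (and, in the paper, angle) estimation.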
The millimeter wave (mmWave) band has received considerable interest due to its abundant bandwidth at high frequencies. However, a noteworthy challenge arises from its vulnerability to blockages, leading to reduced coverage and achievable rates. To address these limitations, a potential solution is to deploy distributed reconfigurable intelligent surfaces (RISs), which comprise many low-cost, passive reflecting elements and can facilitate the establishment of additional communication links. In this paper, we leverage stochastic geometry to investigate the ergodic coverage probability and the achievable rate in both distributed RISs-assisted single-cell and multi-cell mmWave wireless communication systems. Specifically, we first establish the system model, in which the stochastically distributed blockages, RISs, and users are modeled by Poisson point processes. We then give the association criterion and derive the association probabilities, the distance distributions, and the conditional coverage probabilities for the two cases in which base stations and users associate either without or with RISs. Finally, we use Campbell's theorem and the total probability theorem to obtain closed-form expressions for the ergodic coverage probability and the achievable rate. Simulation results verify the effectiveness of our analysis, and demonstrate that deploying distributed RISs significantly improves the ergodic coverage probability, by approximately 50%, and increases the achievable rate by more than 1.5 times.
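The qualitative coverage benefit of PPP-distributed RISs can be reproduced with a small Monte-Carlo sketch. The geometry, densities, and exponential blockage model below are illustrative assumptions, not the paper's closed-form derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumed parameters): BS at the origin, user at distance
# d_u, RISs drawn as a Poisson point process in a disc of radius R, and
# each link of length d surviving blockage with probability exp(-beta*d).
d_u, R, lam, beta, trials = 30.0, 100.0, 1e-3, 0.05, 5000
user = np.array([d_u, 0.0])

def covered(use_ris):
    # direct BS-user link
    if rng.random() < np.exp(-beta * d_u):
        return True
    if not use_ris:
        return False
    # one PPP realization of RIS locations, uniform over the disc
    n = rng.poisson(lam * np.pi * R ** 2)
    r = R * np.sqrt(rng.random(n))
    phi = 2 * np.pi * rng.random(n)
    pts = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
    for p in pts:
        d1 = np.hypot(p[0], p[1])        # BS -> RIS hop
        d2 = np.hypot(*(user - p))       # RIS -> user hop
        if rng.random() < np.exp(-beta * d1) and rng.random() < np.exp(-beta * d2):
            return True                  # a reflected link survives blockage
    return False

p_no = np.mean([covered(False) for _ in range(trials)])
p_ris = np.mean([covered(True) for _ in range(trials)])
assert p_ris > p_no  # distributed RISs lift the coverage probability
```

Even this crude independence-based blockage model shows the mechanism behind the reported gain: reflected links give the user many independent chances of finding an unblocked path.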
In millimeter-wave communications, large-scale antenna arrays are commonly employed to mitigate obstacle occlusion and path loss. However, these large-scale arrays generate pencil-shaped beams, which require a larger number of training beams to cover the desired space, resulting in heavy beam training overhead. Furthermore, as the antenna aperture increases, users are more likely to be situated in the near-field region of the base station (BS) antenna array. This motivates our investigation of the beam training problem in the near-field region to achieve efficient beam alignment. To address the high complexity and low identification accuracy of existing beam training techniques, we propose an efficient hashing multi-arm beam (HMB) training scheme for the near-field scenario. Specifically, we first design a set of sparse bases based on the polar-domain sparsity of the near-field channel and construct a near-field single-beam training codebook. Then, hash functions are chosen to construct the near-field multi-arm beam training codebook, and each multi-arm beam training codeword is used in a time slot until the predefined codebook is traversed. Finally, soft decision and voting methods are applied to distinguish the signals from different BSs and obtain the correctly aligned beams. In addition, we provide a rigorous proof of the computational complexity. Simulation results show that the proposed near-field HMB training method achieves 96.4% of the identification accuracy of exhaustive beam training while greatly reducing the training overhead to the logarithmic level. Furthermore, we verify its applicability in the far-field scenario as well.
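Why multi-arm codewords reduce the overhead to the logarithmic level can be seen in a toy sketch. The binary-bit partition below is one simple choice of hash family and the power model is an assumption, not the paper's codebook design:

```python
import math

import numpy as np

rng = np.random.default_rng(2)

# Toy sketch of logarithmic-overhead multi-arm training: each time slot
# activates the multi-arm beam containing every single-beam codeword
# whose t-th index bit is 1, and a thresholded power measurement votes
# on that bit of the best beam's index.
N = 64                          # single-beam codebook size
b_true = int(rng.integers(N))   # unknown best-aligned beam index
T = math.ceil(math.log2(N))     # log-level slot count vs N exhaustive slots
snr = 20.0                      # aligned-beam power (linear scale, assumed)

bits = []
for t in range(T):
    arm = [b for b in range(N) if (b >> t) & 1]          # beams fired this slot
    power = (snr if b_true in arm else 0.0) + rng.normal(0.0, 0.5) ** 2
    bits.append(1 if power > snr / 2 else 0)             # soft measurement -> vote

b_hat = sum(bit << t for t, bit in enumerate(bits))      # recovered beam index
assert b_hat == b_true
```

Six measurements identify one of 64 beams here, versus 64 slots for exhaustive sweeping; random hash functions generalize this partition so that multiple BSs can be distinguished by voting across slots.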
In this paper, the problem of low-latency communication and computation resource allocation for digital twin (DT) over wireless networks is investigated. In the considered model, multiple physical devices in the physical network (PN) need to frequently offload data related to their computation tasks to the digital network twin (DNT), which is generated and controlled by the central server. Due to the limited energy budget of the physical devices, both computation accuracy and wireless transmission power must be considered during the DT procedure. This joint communication and computation problem is formulated as an optimization problem whose goal is to minimize the overall transmission delay of the system under total PN energy and DNT model accuracy constraints. To solve this problem, an alternating algorithm is proposed that iteratively solves the device scheduling, power control, and data offloading subproblems. For the device scheduling subproblem, the optimal solution is obtained in closed form through the dual method. For the special case with one physical device, the optimal number of transmissions is revealed. Based on these theoretical findings, the original problem is transformed into a simplified problem from which the optimal device scheduling can be found. Numerical results verify that the proposed algorithm can reduce the transmission delay of the system by up to 51.2\% compared to conventional schemes.
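The structure of such an alternating algorithm, blocks optimized in turn under a shared energy budget, can be sketched generically. The delay and energy models below are toy stand-ins for the paper's formulation, and grid search replaces the closed-form and dual-method updates:

```python
import numpy as np

# Illustrative two-block alternating scheme (toy models, assumed
# parameters): the offloading fraction x and the transmit power p are
# optimized in turn under an energy budget, each step via a
# feasibility-checked grid search.
D, F_loc, W, E_max = 8.0, 2.0, 1.0, 3.0  # data size, local rate, bandwidth, energy

def delay(p, x):
    r = W * np.log2(1 + p)                  # uplink rate at power p
    return x * D / r + (1 - x) * D / F_loc  # offloaded + local delay

def energy(p, x):
    return p * x * D / (W * np.log2(1 + p))  # energy spent transmitting

P_GRID = np.linspace(0.1, 10.0, 200)
X_GRID = np.linspace(0.0, 1.0, 201)

p, x = 1.0, 0.2                              # feasible starting point
history = [delay(p, x)]
for _ in range(10):
    feas_p = [q for q in P_GRID if energy(q, x) <= E_max]
    p = min(feas_p, key=lambda q: delay(q, x))     # power-control block
    feas_x = [z for z in X_GRID if energy(p, z) <= E_max]
    x = min(feas_x, key=lambda z: delay(p, z))     # data-offloading block
    history.append(delay(p, x))

# each block keeps the iterate feasible, and the delay never increases
assert all(b <= a + 1e-9 for a, b in zip(history, history[1:]))
assert history[-1] < history[0]
```

The monotone non-increase of the objective across block updates is the property that makes alternating schemes of this kind converge; the paper's closed-form dual solutions serve the same role far more efficiently than the grid search used here.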
This paper investigates secure resource allocation for a downlink integrated sensing and communication system with multiple legitimate users and potential eavesdroppers. In the considered model, the base station (BS) simultaneously transmits sensing and communication signals through beamforming design, where the sensing signals can be viewed as artificial noise that enhances the security of the communication signals. To further enhance security at the semantic layer, semantic information is extracted from the original information before transmission, and users can successfully recover the received information only with the help of a knowledge base that is shared with the BS and stored in advance. Our aim is to maximize the sum semantic secrecy rate of all users while maintaining a minimum quality of service for each user and guaranteeing the overall sensing performance. To solve this sum semantic secrecy rate maximization problem, an iterative algorithm based on the alternating optimization method is proposed. Simulation results demonstrate the superiority of the proposed algorithm in terms of secure semantic communication and reliable detection.
This paper addresses the challenge of achieving information-theoretic security in semantic communication (SeCom) over a wiretap channel, where a legitimate receiver coexists with an eavesdropper experiencing a poorer channel condition. Despite previous efforts to secure SeCom against eavesdroppers, achieving information-theoretic security in such schemes remains an open issue. In this work, we propose a secure digital SeCom approach based on superposition codes, aiming to attain nearly information-theoretic security. Our method associates semantic information with satellite constellation points within a double-layered constellation map, where the cloud center constellation points are randomly selected. By carefully allocating power between the two constellation layers, we ensure that the symbol error probability (SEP) of the eavesdropper decoding the satellite constellation points is nearly equivalent to that of random guessing, while the legitimate receiver maintains a low SEP and successfully decodes the semantic information. Simulation results show that, under the proposed method, the peak signal-to-noise ratio (PSNR) and mean squared error (MSE) of the eavesdropper's reconstructed data approach those obtained when decoding Gaussian-distributed random noise, with the MSE approaching the variance of the data. This validates the ability of our method to achieve nearly information-theoretic security, demonstrating superior data security compared to benchmark methods.
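The double-layered constellation mechanism can be demonstrated numerically. The QPSK/QPSK layering, the power split, and the two SNRs below are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sketch of the superposition constellation: randomly chosen cloud
# centers carry most of the power, semantic bits ride on the satellite
# layer, and successive decoding recovers them only at the legitimate
# receiver's SNR.
QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
n, P1, P2 = 20000, 0.9, 0.1       # symbols, cloud power, satellite power
cloud = rng.integers(4, size=n)   # randomly selected cloud centers
sat = rng.integers(4, size=n)     # semantic information symbols
x = np.sqrt(P1) * QPSK[cloud] + np.sqrt(P2) * QPSK[sat]

def detect(y):
    """Nearest-neighbor QPSK detection."""
    return np.argmin(np.abs(y[:, None] - QPSK[None, :]), axis=1)

def satellite_sep(snr_db):
    sigma = 10 ** (-snr_db / 20)
    y = x + sigma / np.sqrt(2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    c_hat = detect(y / np.sqrt(P1))                            # cloud layer first
    s_hat = detect((y - np.sqrt(P1) * QPSK[c_hat]) / np.sqrt(P2))  # then satellite
    return float(np.mean(s_hat != sat))

sep_bob = satellite_sep(25.0)  # legitimate receiver: low SEP
sep_eve = satellite_sep(0.0)   # degraded eavesdropper: near random guessing
assert sep_bob < 0.05 and sep_eve > 0.6
```

For a 4-point satellite layer, pure random guessing gives an SEP of 0.75; the eavesdropper's measured SEP lands close to that value, which is the operational meaning of "nearly information-theoretic security" here.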
In multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems, representing the whole channel based only on partial subchannels can significantly reduce the channel acquisition overhead. For such a channel mapping task, inspired by the intrinsic coupling across the space and frequency domains, this letter proposes interleaved learning with partial antenna and subcarrier characteristics to represent the whole MIMO-OFDM channel. Specifically, we design a complex-domain multilayer perceptron (MLP)-Mixer (CMixer), which utilizes two kinds of complex-domain MLP modules to learn the space and frequency characteristics respectively and then interleaves them to couple the learned properties. The complex-domain computation facilitates learning on the complex-valued channel data, while the interleaving tightens the coupling of the space and frequency domains. These two designs jointly reduce the learning burden, making the physics-inspired CMixer more effective at channel representation learning than existing data-driven approaches. Simulations show that the proposed scheme brings 4.6--10 dB gains in mapping accuracy compared to existing schemes under different settings. Besides, ablation studies show the necessity of complex-domain computation as well as the extent to which the interleaved learning matches the channel properties.
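The interleaving of space-domain and frequency-domain complex MLPs can be shown as a minimal forward pass. Layer sizes, the residual wiring, and the split-complex activation are assumptions, not the paper's exact CMixer architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal forward-pass sketch of an interleaved complex-domain mixer:
# the channel is an antennas x subcarriers complex matrix; one sublayer
# mixes along the space axis, the next along the frequency axis, and
# stacking blocks interleaves the two domains.
A, S = 8, 16  # antennas (space) x subcarriers (frequency)

def c_dense(dim):
    """A complex-valued linear layer with random weights."""
    Wc = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2 * dim)
    return lambda X: Wc @ X

def c_relu(X):
    """Split-complex activation applied to real and imaginary parts."""
    return np.maximum(X.real, 0) + 1j * np.maximum(X.imag, 0)

space_mlp = c_dense(A)  # acts on columns: per-subcarrier antenna vectors
freq_mlp = c_dense(S)   # acts on rows: per-antenna subcarrier vectors

def cmixer_block(H):
    H = H + c_relu(space_mlp(H))     # space mixing, residual connection
    H = H + c_relu(freq_mlp(H.T)).T  # frequency mixing, residual connection
    return H

H_part = rng.standard_normal((A, S)) + 1j * rng.standard_normal((A, S))
H_out = cmixer_block(cmixer_block(H_part))
assert H_out.shape == (A, S)
```

Each sublayer only ever sees one domain at a time, so its parameter count scales with one dimension rather than their product, which is the source of the reduced learning burden described above.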
Acquisition and processing of point clouds (PCs) is a crucial enabler for many emerging applications reliant on 3D spatial data, such as robot navigation, autonomous vehicles, and augmented reality. In most scenarios, PCs acquired by remote sensors must be transmitted to an edge server for fusion, segmentation, or inference. Wireless transmission of PCs not only places an increased burden on the already congested wireless spectrum, but also confronts a unique set of challenges arising from the irregular and unstructured nature of PCs. In this paper, we delineate these challenges and offer a comprehensive examination of existing solutions while candidly acknowledging their inherent limitations. In response, we propose four pragmatic solution frameworks, spanning advanced techniques, hybrid schemes, and distributed data aggregation approaches. In doing so, our goal is to chart a path toward efficient, reliable, and low-latency wireless PC transmission.
This work introduces a new Transformer model called Cached Transformer, which uses Gated Recurrent Cached (GRC) attention to extend the self-attention mechanism with a differentiable memory cache of tokens. GRC attention enables attending to both past and current tokens, increasing the receptive field of attention and capturing long-range dependencies. By utilizing a recurrent gating unit to continuously update the cache, our model achieves significant advancements across \textbf{six} language and vision tasks, including language modeling, machine translation, ListOPs, image classification, object detection, and instance segmentation. Furthermore, our approach surpasses previous memory-based techniques in tasks such as language modeling and generalizes to a broader range of scenarios.
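The gated recurrent cache update and the joint attention over cached and current tokens can be sketched in a single-head toy. Dimensions, the pooling used to summarize tokens, and the gating form are illustrative assumptions, not the paper's trained modules:

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal single-head sketch of attention over a gated recurrent cache:
# the cache keeps a running, differentiable summary of past tokens, and
# each step attends jointly over cached and current tokens.
d, n_cache, n_tok = 16, 4, 6

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
Wg = rng.standard_normal((d, d)) / np.sqrt(d)

cache = np.zeros((n_cache, d))
for _ in range(3):  # three incoming token segments
    X = rng.standard_normal((n_tok, d))
    # recurrent gating: a sigmoid gate interpolates the old cache and a
    # new summary of the segment (truncation stands in for learned pooling)
    summary = X[:n_cache]
    g = 1.0 / (1.0 + np.exp(-(summary @ Wg)))
    cache = g * cache + (1.0 - g) * summary
    # attend over cached + current tokens: the enlarged receptive field
    mem = np.concatenate([cache, X], axis=0)
    attn = softmax((X @ Wq) @ (mem @ Wk).T / np.sqrt(d))
    out = attn @ (mem @ Wv)

assert out.shape == (n_tok, d)
```

Because the gate is a differentiable function of the tokens, gradients flow through the cache across segments, which is what distinguishes this design from non-differentiable memory lookups.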