Abstract:Despite the advantages of multi-agent reinforcement learning (MARL) for wireless use cases such as medium access control (MAC), its real-world deployment in Internet of Things (IoT) systems is hindered by its sample inefficiency. To alleviate this challenge, one can leverage model-based reinforcement learning (MBRL) solutions; however, conventional MBRL approaches rely on black-box models that are neither interpretable nor capable of reasoning. In contrast, in this paper, a novel causal model-based MARL framework is developed by leveraging tools from causal learning. In particular, the proposed model can explicitly represent causal dependencies between network variables using structural causal models (SCMs) and attention-based inference networks. Interpretable causal models are then developed to capture how MAC control messages influence observations, how transmission actions determine outcomes, and how channel observations affect rewards. Data augmentation techniques are then used to generate synthetic rollouts from the learned causal model for policy optimization via proximal policy optimization (PPO). Analytical results demonstrate exponential sample-complexity gains of causal MBRL over black-box approaches. Extensive simulations demonstrate that, on average, the proposed approach can reduce environment interactions by 58% and yield faster convergence compared to model-free baselines. The proposed approach is also shown to inherently provide interpretable scheduling decisions via attention-based causal attribution, revealing which network conditions drive the policy. The resulting combination of sample efficiency and interpretability establishes causal MBRL as a practical approach for resource-constrained wireless systems.
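The data-augmentation step described above can be illustrated with a minimal sketch: imagined rollouts are generated from a learned dynamics model and collected for a PPO-style update. The causal_model and policy functions below are hypothetical placeholders, not the framework's actual components.

```python
# Minimal sketch (not the authors' code): synthetic rollouts from a learned
# dynamics model, gathered into a buffer for a PPO-style policy update.
import numpy as np

rng = np.random.default_rng(0)

def causal_model(state, action):
    """Placeholder for the learned causal model: predicts next state and reward."""
    next_state = 0.9 * state + 0.1 * action + 0.01 * rng.standard_normal(state.shape)
    reward = float(-np.linalg.norm(next_state))
    return next_state, reward

def policy(state):
    """Placeholder stochastic policy returning a continuous action."""
    return np.tanh(state) + 0.1 * rng.standard_normal(state.shape)

def imagine_rollouts(start_states, horizon=5):
    """Generate synthetic (s, a, r, s') tuples without touching the real network."""
    buffer = []
    for s in start_states:
        for _ in range(horizon):
            a = policy(s)
            s_next, r = causal_model(s, a)
            buffer.append((s, a, r, s_next))
            s = s_next
    return buffer

synthetic = imagine_rollouts([rng.standard_normal(4) for _ in range(8)])
print(len(synthetic), "synthetic transitions available for the PPO update")
```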
Abstract:In this paper, a measurement-driven framework is proposed for early radio link failure (RLF) prediction in 5G non-standalone (NSA) railway environments. Using 10 Hz metro-train traces with serving and neighbor-cell indicators, we benchmark six models, namely CNN, LSTM, XGBoost, Anomaly Transformer, PatchTST, and TimesNet, under varied observation windows and prediction horizons. When the observation window is three seconds, TimesNet attains the highest F1 score with a three-second prediction horizon, while CNN provides a favorable accuracy-latency tradeoff with a two-second horizon, enabling proactive actions such as redundancy and adaptive handovers. The results indicate that deep temporal models can anticipate reliability degradations several seconds in advance using lightweight features available on commercial devices, offering a practical path to early-warning control in 5G-based railway systems.
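A minimal sketch of the assumed preprocessing is shown below: a 10 Hz trace is converted into (observation window, label) pairs, where the label indicates whether an RLF occurs within the prediction horizon. The feature and flag arrays are toy stand-ins, not the metro-train dataset.

```python
# Minimal sketch (assumed preprocessing, not the paper's pipeline): sliding
# observation windows over a 10 Hz trace with a look-ahead RLF label.
import numpy as np

def make_samples(features, rlf_flags, obs_win_s=3.0, horizon_s=3.0, rate_hz=10):
    w = int(obs_win_s * rate_hz)      # e.g., 3 s window -> 30 samples
    h = int(horizon_s * rate_hz)      # e.g., 3 s horizon -> 30 samples
    X, y = [], []
    for t in range(w, len(features) - h):
        X.append(features[t - w:t])                 # past observation window
        y.append(int(rlf_flags[t:t + h].any()))     # RLF within the horizon?
    return np.stack(X), np.array(y)

feats = np.random.randn(600, 4)        # 60 s of toy serving/neighbor-cell indicators
flags = np.random.rand(600) < 0.01     # toy RLF event markers
X, y = make_samples(feats, flags)
print(X.shape, float(y.mean()))
```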
Abstract:Traditional joint source-channel coding employs static learned semantic representations that cannot dynamically adapt to evolving source distributions. Shared semantic memories between transmitter and receiver can potentially enable bandwidth savings by reusing previously transmitted concepts as context to reconstruct data, but require effective mechanisms to determine when current content is similar enough to stored patterns. However, existing hard quantization approaches based on variational autoencoders are limited by frequent memory updates even under small changes in data dynamics, which leads to inefficient usage of bandwidth. To address this challenge, in this paper, a memory-augmented semantic communication framework is proposed where both transmitter and receiver maintain a shared memory of semantic concepts using modern Hopfield networks (MHNs). The proposed framework employs soft attention-based retrieval that smoothly adjusts the weights of stored semantic prototypes as the data evolves, thereby enabling stable matching decisions under gradual data dynamics. A joint optimization of the encoder, decoder, and memory retrieval mechanism is performed with the objective of maximizing a reasoning capacity metric that quantifies semantic efficiency as the product of the memory reuse rate and the compression ratio. Theoretical analysis establishes the fundamental rate-distortion-reuse tradeoff and proves that soft retrieval reduces unnecessary transmissions compared to hard quantization under bounded semantic drift. Extensive simulations over diverse video scenarios demonstrate that the proposed MHN-based approach achieves substantial bit reductions of around 14% on average, and up to 70% in scenarios with gradual content changes, compared to the baseline.
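The soft retrieval step can be sketched as attention over stored prototypes in the style of modern Hopfield networks, together with the reasoning-capacity metric stated as reuse rate times compression ratio. The shapes and the beta temperature below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (assumptions, not the paper's implementation): MHN-style soft
# retrieval from a shared semantic memory, plus the reasoning-capacity metric.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_retrieve(query, memory, beta=4.0):
    """Attention over stored prototypes; weights change smoothly as content drifts."""
    weights = softmax(beta * memory @ query)
    return weights @ memory, weights

def reasoning_capacity(reuse_rate, compression_ratio):
    """Semantic efficiency proxy: memory reuse rate times compression ratio."""
    return reuse_rate * compression_ratio

memory = np.random.randn(16, 32)                   # 16 stored semantic prototypes
memory /= np.linalg.norm(memory, axis=1, keepdims=True)
query = memory[3] + 0.05 * np.random.randn(32)     # slightly drifted content
query /= np.linalg.norm(query)
retrieved, w = soft_retrieve(query, memory)
print(int(w.argmax()), reasoning_capacity(reuse_rate=0.6, compression_ratio=8.0))
```

A hard-quantization baseline would instead replace the softmax with an argmax over prototypes, which is precisely the brittleness under gradual drift that the soft retrieval is meant to avoid.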
Abstract:In 6G wireless networks, multi-modal ML models can be leveraged to enable situation-aware network decisions in dynamic environments. However, trained ML models often fail to generalize under domain shifts, when training and test data distributions differ, because they often focus on modality-specific spurious features. In practical wireless systems, domain shifts occur frequently due to dynamic channel statistics, moving obstacles, or hardware configurations. Thus, there is a need for learning frameworks that can achieve robust generalization under scarce multi-modal data in wireless networks. In this paper, a novel and data-efficient two-phase learning framework is proposed to improve generalization performance in unseen and unfamiliar wireless environments with a minimal amount of multi-modal data. In the first phase, a physics-based loss function is employed to enable each base station (BS) to learn the physics underlying its wireless environment captured by multi-modal data. The data-efficiency of the physics-based loss function is analytically investigated. In the second phase, collaborative domain adaptation is proposed to leverage the wireless environment knowledge of multiple BSs to guide under-performing BSs under domain shift. Specifically, domain-similarity-aware model aggregation is proposed to utilize the knowledge of BSs that experienced similar domains. To validate the proposed framework, a new dataset generation framework is developed by integrating CARLA and MATLAB-based millimeter wave (mmWave) channel modeling to predict mmWave received signal strength (RSS). Simulation results show that the proposed physics-based training requires only 13% of data samples to achieve the same performance as a state-of-the-art baseline that does not use physics-based training. Moreover, the proposed collaborative domain adaptation needs only 25% of data samples and 20% of FLOPs to achieve convergence compared to baselines.
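The domain-similarity-aware aggregation idea can be sketched as a softmax weighting of neighboring BSs' model parameters by how close their domain statistics are to those of the target BS; the statistics and flat parameter vectors below are hypothetical placeholders, not the proposed scheme.

```python
# Minimal sketch (hypothetical, illustrating the idea only): an under-performing BS
# combines neighbors' parameters, weighted by domain similarity.
import numpy as np

def similarity_weights(target_stats, neighbor_stats):
    """Higher weight for BSs whose domain statistics are closer to the target's."""
    dists = np.array([np.linalg.norm(target_stats - s) for s in neighbor_stats])
    logits = -dists
    e = np.exp(logits - logits.max())
    return e / e.sum()

def aggregate(neighbor_params, weights):
    """Weighted average of neighbor model parameters (one flat vector per BS)."""
    return sum(w * p for w, p in zip(weights, neighbor_params))

target = np.array([0.2, 1.5])                         # e.g., channel/obstacle statistics
neighbors = [np.array([0.25, 1.4]), np.array([2.0, 0.1]), np.array([0.3, 1.6])]
params = [np.random.randn(10) for _ in neighbors]
w = similarity_weights(target, neighbors)
print(np.round(w, 3), aggregate(params, w).shape)
```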
Abstract:Accurate prediction of communication link quality metrics is essential for vehicle-to-infrastructure (V2I) systems, enabling smooth handovers, efficient beam management, and reliable low-latency communication. The increasing availability of sensor data from modern vehicles motivates the use of multimodal large language models (MLLMs) because of their adaptability across tasks and reasoning capabilities. However, MLLMs inherently lack three-dimensional spatial understanding. To overcome this limitation, a lightweight, plug-and-play bird's-eye view (BEV) injection connector is proposed. In this framework, a BEV of the environment is constructed by collecting sensing data from neighboring vehicles. This BEV representation is then fused with the ego vehicle's input to provide spatial context for the large language model. To support realistic multimodal learning, a co-simulation environment combining the CARLA simulator with MATLAB-based ray tracing is developed to generate RGB, LiDAR, GPS, and wireless signal data across varied scenarios. Instructions and ground-truth responses are programmatically extracted from the ray-tracing outputs. Extensive experiments are conducted across three V2I link prediction tasks: line-of-sight (LoS) versus non-line-of-sight (NLoS) classification, link availability, and blockage prediction. Simulation results show that the proposed BEV injection framework consistently improves performance across all tasks. The results indicate that, compared to an ego-only baseline, the proposed approach improves the macro-average of the accuracy metrics by up to 13.9%. The results also show that this performance gain increases by up to 32.7% under challenging rainy and nighttime conditions, confirming the robustness of the framework in adverse settings.
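A minimal sketch of the injection idea, under assumed shapes and a placeholder linear projection, is to map BEV cells into the language model's token-embedding space and prepend them to the ego vehicle's multimodal tokens:

```python
# Minimal sketch (illustrative, not the proposed connector): project a
# neighbor-aggregated BEV grid into token embeddings and prepend to ego tokens.
import numpy as np

def bev_connector(bev_grid, embed_dim=64, rng=np.random.default_rng(0)):
    """Flatten BEV cells into 'spatial tokens' via a linear projection (placeholder)."""
    cells = bev_grid.reshape(-1, bev_grid.shape[-1])                        # (H*W, C)
    proj = rng.standard_normal((bev_grid.shape[-1], embed_dim)) / np.sqrt(embed_dim)
    return cells @ proj                                                     # (H*W, D)

ego_tokens = np.random.randn(12, 64)          # ego RGB/LiDAR/GPS tokens (assumed)
bev = np.random.randn(8, 8, 16)               # BEV built from neighbors' sensing
llm_input = np.concatenate([bev_connector(bev), ego_tokens], axis=0)
print(llm_input.shape)                        # spatial context followed by ego context
```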
Abstract:In this paper, the precoding design is investigated for maximizing the throughput of millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems with obstructed direct communication paths. In particular, a reconfigurable intelligent surface (RIS) is employed to enhance MIMO transmissions, considering mmWave characteristics related to line-of-sight (LoS) and multipath effects. A traditional exhaustive search (ES) for the optimal codeword over continuous phase shifts is computationally intensive and time-consuming. To reduce computational complexity, permuted discrete Fourier transform (DFT) vectors are used for codebook design, incorporating amplitude responses for practical or ideal RIS systems. However, even when discrete phase shifts are adopted, the ES still incurs significant computation time. Instead, a deep neural network (DNN) is trained to facilitate faster codeword selection. Simulation results show that the DNN maintains near-optimal spectral efficiency even as the distance between the end-user and the RIS varies in the testing phase. These results highlight the potential of DNNs in advancing RIS-aided systems.
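The codebook construction can be sketched as taking permuted columns of an N-point DFT matrix as candidate RIS phase-shift codewords and scoring each with a toy effective-channel rate; the channels and unit-amplitude assumption below are illustrative, not the paper's exact model.

```python
# Minimal sketch (assumed construction, not the paper's exact design): permuted
# DFT columns as RIS reflection codewords, each scored by a toy cascaded-channel rate.
import numpy as np

def dft_codebook(n_elements):
    k = np.arange(n_elements)
    # Column c is the codeword exp(-j*2*pi*k*c/N), k = 0..N-1
    return np.exp(-2j * np.pi * np.outer(k, k) / n_elements)

def spectral_efficiency(codeword, h_rx, h_tx):
    """Toy rate of the RIS-cascaded channel for one reflection codeword."""
    g = h_rx @ np.diag(codeword) @ h_tx
    return np.log2(1.0 + np.abs(g) ** 2)

N = 16
codebook = dft_codebook(N)[:, np.random.permutation(N)]   # permuted DFT vectors
h_tx = np.random.randn(N) + 1j * np.random.randn(N)       # BS-to-RIS channel (toy)
h_rx = np.random.randn(N) + 1j * np.random.randn(N)       # RIS-to-user channel (toy)
rates = [spectral_efficiency(codebook[:, i], h_rx, h_tx) for i in range(N)]
print("best codeword index:", int(np.argmax(rates)))
```

A DNN-based selector would replace the explicit loop over all N codewords with a single forward pass that maps channel or location features to the best codeword index.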
Abstract:Recent studies have shown that vector representations of contextual embeddings learned by pre-trained large language models (LLMs) are effective in various downstream tasks in numerical domains. Despite their significant benefits, the tendency of LLMs to hallucinate in such domains can have severe consequences in applications such as energy, nature, finance, healthcare, retail and transportation, among others. To guarantee prediction reliability and accuracy in numerical domains, it is necessary to open the black-box and provide performance guarantees through explanation. However, there is little theoretical understanding of when pre-trained language models help solve numeric downstream tasks. This paper seeks to bridge this gap by understanding when the next-word prediction capability of LLMs can be adapted to numerical domains through a novel analysis based on the concept of isotropy in the contextual embedding space. Specifically, we consider a log-linear model for LLMs in which numeric data can be predicted from its context through a network with softmax in the output layer of LLMs (i.e., language model head in self-attention). We demonstrate that, in order to achieve state-of-the-art performance in numerical domains, the hidden representations of the LLM embeddings must possess a structure that accounts for the shift-invariance of the softmax function. By formulating a gradient structure of self-attention in pre-trained models, we show how the isotropic property of LLM embeddings in contextual embedding space preserves the underlying structure of representations, thereby resolving the shift-invariance problem and providing a performance guarantee. Experiments show that different characteristics of numeric data and model architecture could have different impacts on isotropy.
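The shift-invariance property invoked above can be stated directly: adding a constant to every logit leaves the softmax output unchanged, so the language-model head determines logits only up to an additive shift.

```latex
% Shift-invariance of the softmax: adding a constant c to every logit does not
% change the output distribution.
\[
\mathrm{softmax}(z + c\mathbf{1})_i
  = \frac{e^{z_i + c}}{\sum_j e^{z_j + c}}
  = \frac{e^{c}\, e^{z_i}}{e^{c} \sum_j e^{z_j}}
  = \mathrm{softmax}(z)_i .
\]
% Hence the hidden representations feeding the language-model head must encode a
% structure compatible with this additive ambiguity, which is where the isotropy
% argument of the paper enters.
```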
Abstract:Cellular vehicle-to-everything (C-V2X) networks provide a promising solution to improve road safety and traffic efficiency. One key challenge in such systems lies in meeting the quality-of-service (QoS) requirements of vehicular communication links given limited network resources, particularly under imperfect channel state information (CSI) conditions caused by the highly dynamic environment. In this paper, a novel two-phase framework is proposed to instill resilience into C-V2X networks under unknown imperfect CSI. The resilience of the C-V2X network is defined, quantified, and optimized for the first time through two principal dimensions: an absorption phase and an adaptation phase. Specifically, the probability distribution function (PDF) of the imperfect CSI is estimated during the absorption phase through a dedicated absorption power scheme and resource block (RB) assignment. The estimated PDF is further used to analyze the interplay and reveal the tradeoff between these two phases. Then, a novel metric named the hazard rate (HR) is exploited to balance the C-V2X network's prioritization between absorption and adaptation. Finally, the estimated PDF is exploited in the adaptation phase to recover the network's QoS through real-time power allocation optimization. Simulation results demonstrate the superior capability of the proposed framework in sustaining the QoS of the C-V2X network under imperfect CSI. Specifically, in the adaptation phase, the proposed design reduces the vehicle-to-vehicle (V2V) delay that exceeds the QoS requirement by 35% and 56%, and improves the average vehicle-to-infrastructure (V2I) throughput by 14% and 16%, compared to the model-based and data-driven benchmarks, respectively, without compromising the network's QoS in the absorption phase.
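For reference, the hazard rate can be written in its standard form in terms of the estimated PDF and the corresponding CDF; the exact conditioning variable used in the paper may differ.

```latex
% Standard form of the hazard rate (stated generically, as an assumption about
% the metric's form). With the estimated CSI-error PDF \hat{f}(x) and CDF \hat{F}(x),
\[
\mathrm{HR}(x) \;=\; \frac{\hat{f}(x)}{1 - \hat{F}(x)},
\]
% i.e., the instantaneous likelihood of the error reaching x given that it has not
% yet been exceeded; a rising HR signals that the network should shift priority
% from absorption toward adaptation.
```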




Abstract:Traditional reinforcement learning (RL) approaches for wireless networks rely on expensive trial-and-error mechanisms and real-time feedback from extensive environment interactions, which leads to low data efficiency and short-sighted policies. These limitations become particularly problematic in complex, dynamic networks with high uncertainty and long-term planning requirements. To overcome these limitations, in this paper, a novel world model-based learning framework is proposed to minimize the packet-completeness-aware age of information (CAoI) in a vehicular network. In particular, a challenging and representative scenario is considered, pertaining to a millimeter-wave (mmWave) vehicle-to-everything (V2X) communication network characterized by high mobility, frequent signal blockages, and extremely short coherence time. A world model framework is then proposed to jointly learn a dynamic model of the mmWave V2X environment and use it to imagine trajectories for learning how to perform link scheduling. Specifically, the long-term policy is learned over differentiable imagined trajectories instead of environment interactions. Moreover, owing to its imagination abilities, the world model can jointly predict time-varying wireless data and optimize link scheduling in real-world wireless and V2X networks. Thus, during intervals without actual observations, the world model remains capable of making efficient decisions. Extensive experiments are performed on a realistic simulator based on Sionna that integrates physics-based end-to-end channel modeling, ray tracing, and scene geometries with material properties. Simulation results show that the proposed world model achieves a significant improvement in data efficiency, and achieves 26% and 16% improvements in CAoI compared to the model-based RL (MBRL) and model-free RL (MFRL) methods, respectively.
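The imagination step can be sketched as rolling a learned latent dynamics model forward to score candidate link schedules by their predicted effect on an age-of-information counter, with no new environment interactions; the dynamics, latent dimension, and age update below are illustrative placeholders, not the trained world model.

```python
# Minimal sketch (illustrative only): scoring link schedules inside an imagined
# trajectory of a learned latent dynamics model, using an AoI-style cost.
import numpy as np

rng = np.random.default_rng(1)

def latent_step(z, action):
    """Placeholder learned dynamics: next latent state and predicted link success."""
    z_next = np.tanh(0.8 * z + 0.2 * action + 0.05 * rng.standard_normal(z.shape))
    p_success = 1.0 / (1.0 + np.exp(-z_next.mean()))
    return z_next, p_success

def imagined_age_cost(z, schedule, horizon=10):
    """Expected age along the imagined trajectory: age resets on successful delivery."""
    age, cost = 1.0, 0.0
    for t in range(horizon):
        z, p = latent_step(z, schedule[t])
        age = (1 - p) * (age + 1) + p * 1.0     # expected age-of-information update
        cost += age
    return cost

z0 = rng.standard_normal(8)
candidates = [rng.standard_normal((10, 8)) for _ in range(4)]
best = min(range(4), key=lambda i: imagined_age_cost(z0.copy(), candidates[i]))
print("best imagined schedule:", best)
```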




Abstract:Integrated sensing and communication (ISAC) has emerged as a key feature of next-generation 6G wireless systems, allowing them to achieve high data rates and sensing accuracy. While prior research has primarily focused on addressing communication safety in ISAC systems, the equally critical issue of sensing safety remains largely ignored. In this paper, a novel threat to the sensing safety of ISAC vehicular networks is studied, whereby a malicious reconfigurable intelligent surface (RIS) is deployed to compromise the sensing functionality of a roadside unit (RSU). Specifically, a malicious attacker dynamically adjusts the phase shifts of the RIS to spoof the sensing outcomes of a vehicular user (VU)'s echo delay, Doppler shift, and angle-of-departure (AoD). To achieve spoofing of the Doppler shift estimation, a time-varying phase shift design for the RIS is proposed. Furthermore, the feasible spoofing frequency set with respect to the Doppler shift is analytically derived. Analytical results also demonstrate that the maximum likelihood estimator (MLE) of the AoD can be significantly misled under spoofed Doppler shift estimation. Simulation results validate the theoretical findings, showing that the RIS can induce spoofed velocity estimates ranging from 0.1 m/s to 14.9 m/s for a VU with an actual velocity of 10 m/s, and can cause an AoD estimation error of up to 65^{\circ} with only a 5^{\circ} beam misalignment.
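The Doppler-spoofing mechanism can be illustrated in generic form: if the RIS imposes a linearly time-varying phase on the reflected echo, the echo acquires an additional frequency shift that biases the Doppler estimate (the paper's exact phase design may differ).

```latex
% Generic illustration of RIS-induced Doppler spoofing (an assumption about the
% mechanism's form, not the paper's exact design). If each RIS element adds a
% linearly time-varying phase \phi_n(t) = 2\pi f_s t to the reflected echo, the
% RIS-path component of the received signal becomes
\[
y(t) \;\propto\; e^{j2\pi (f_D + f_s)\, t}\, s(t - \tau),
\]
% so the RSU's estimator observes an apparent Doppler shift f_D + f_s, and the
% attacker can steer the spoofed velocity estimate by choosing f_s within the
% feasible spoofing frequency set.
```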