Within the context of massive machine-type communications, reconfigurable intelligent surfaces (RISs) represent a promising technology to boost system performance in scenarios with poor channel conditions. Considering single-antenna sensors transmitting short data packets to a multiple-antenna collector node, we introduce and design an RIS to maximize the weighted sum rate (WSR) of the system operating in the finite blocklength regime. Due to the large number of reflecting elements and their passive nature, channel estimation errors may occur. In this letter, we therefore propose a robust RIS optimization to combat this detrimental issue. Based on concave bounds and approximations, the nonconvex WSR problem for the RIS response is addressed via successive convex optimization (SCO). Numerical experiments validate the performance and complexity of the SCO solutions.
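For context, the objective such a design typically targets can be sketched with the finite-blocklength normal approximation (generic notation, assumed here rather than taken from the letter): for blocklength $n$, target error probability $\epsilon$, and per-sensor SINR $\gamma_k$,
\[
R_k \approx \log_2(1+\gamma_k) - \sqrt{\frac{V(\gamma_k)}{n}}\, Q^{-1}(\epsilon), \qquad V(\gamma_k) = \left(1 - \frac{1}{(1+\gamma_k)^2}\right)(\log_2 e)^2 ,
\]
so the WSR problem becomes $\max_{\boldsymbol{\phi}} \sum_k w_k R_k(\boldsymbol{\phi})$ over the RIS response $\boldsymbol{\phi}$; the square-root penalty makes the objective nonconvex in $\gamma_k(\boldsymbol{\phi})$, which is precisely why concave surrogates and SCO are employed.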
The increasing complexity of modern applications demands wireless networks capable of real-time adaptability and efficient resource management. The Open Radio Access Network (O-RAN) architecture, with its RAN Intelligent Controller (RIC) modules, has emerged as a pivotal solution for dynamic resource management and network slicing. While artificial intelligence (AI)-driven methods have shown promise, most approaches struggle to maintain performance under unpredictable and highly dynamic conditions. This paper proposes an adaptive Meta-Hierarchical Reinforcement Learning (Meta-HRL) framework, inspired by Model-Agnostic Meta-Learning (MAML), to jointly optimize resource allocation and network slicing in O-RAN. The framework integrates hierarchical control with meta-learning to enable both global and local adaptation: the high-level controller allocates resources across slices, while low-level agents perform intra-slice scheduling. The adaptive meta-update mechanism weights tasks by their temporal-difference (TD) error variance, improving stability and prioritizing complex network scenarios. Theoretical analysis establishes sublinear convergence and regret guarantees for the two-level learning process. Simulation results demonstrate a 19.8% improvement in network management efficiency compared with baseline RL and meta-RL approaches, along with faster adaptation and higher QoS satisfaction across eMBB, URLLC, and mMTC slices. Additional ablation and scalability studies confirm the method's robustness, achieving up to 40% faster adaptation and consistent fairness, latency, and throughput performance as the network scale increases.
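As an illustrative sketch of such a weighted meta-update (symbols are our assumptions, not the paper's notation), a MAML-style outer step over tasks $\mathcal{T}_i$ weighted by TD-error variance could read
\[
\theta \leftarrow \theta - \beta \sum_{i} w_i\, \nabla_\theta \mathcal{L}_{\mathcal{T}_i}\!\big(\theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(\theta)\big), \qquad w_i = \frac{\operatorname{Var}(\delta_i)}{\sum_j \operatorname{Var}(\delta_j)},
\]
where $\delta_i$ collects the TD errors observed on task $i$, so harder (higher-variance) network scenarios receive a larger share of the meta-gradient.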
The coexistence of heterogeneous 5G service classes, namely Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and Massive Machine-Type Communication (mMTC), poses major challenges for meeting diverse Quality-of-Service (QoS) requirements under limited spectrum and power resources. Existing radio access network (RAN) slicing schemes typically optimise isolated layers or objectives, lacking physical-layer realism, slot-level adaptability, and interpretable per-slice performance metrics. This paper presents a joint optimisation framework that integrates Dynamic Hybrid Resource Utilisation with MCS-Based Intelligent Layering, formulated as a mixed-integer linear program (MILP) that jointly allocates bandwidth, power, and modulation and coding scheme (MCS) indices per slice. The model incorporates finite blocklength effects, channel misreporting, and correlated fading to ensure realistic operation. Two modes are implemented: a Baseline Mode that ensures resource-efficient QoS feasibility, and an Ideal-Chaser Mode that minimises deviation from ideal per-slice rates. Simulation results show that the proposed approach achieves energy efficiencies above $10^7$~kb/J in Baseline Mode and sub-millisecond latency with near-ideal throughput in Ideal-Chaser Mode, outperforming recent optimisation and learning-based methods in delay, fairness, and reliability. The framework provides a unified, interpretable, and computationally tractable solution for dynamic cross-layer resource management in 5G and beyond networks.
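A minimal skeleton of such a per-slice allocation problem, with hypothetical symbols (slice bandwidths $b_s$, powers $p_s$, binary MCS selectors $x_{s,m}$ with spectral efficiency $\eta_m$), could be written as
\[
\max_{b_s,\,p_s,\,x_{s,m}} \; \sum_{s} w_s \sum_{m} \eta_m\, x_{s,m}\, b_s
\quad \text{s.t.} \quad \sum_s b_s \le B_{\mathrm{tot}},\;\; \sum_s p_s \le P_{\mathrm{tot}},\;\; \sum_m x_{s,m} = 1,\;\; x_{s,m}\in\{0,1\},
\]
together with per-slice QoS constraints; the $x_{s,m} b_s$ products are handled with standard big-$M$ linearizations so the program remains a MILP. This is only a sketch of the problem class, not the paper's exact formulation.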
With the arrival of 6G, Internet of Things (IoT) traffic is becoming increasingly complex and diverse. To meet the diverse service requirements of IoT devices, massive machine-type communications (mMTC) has become a typical scenario, and more recently, grant-free random access (GF-RA) has emerged as a promising direction due to its low signaling overhead. However, existing GF-RA research primarily focuses on improving the accuracy of user detection and data recovery, without considering the heterogeneity of traffic. In this paper, we investigate a non-orthogonal GF-RA scenario where two distinct types of traffic coexist: event-triggered traffic from alarm devices (ADs), and status-update traffic from monitor devices (MDs). The goal is to simultaneously achieve high detection success rates for ADs and high information timeliness for MDs. First, we analyze the age-based random access scheme and optimize the access parameters to minimize the average age of information (AoI) of MDs. Then, we design an age-based prior-information-aided autoencoder (A-PIAAE) to jointly detect active devices, together with learned pilots used in GF-RA to reduce interference between non-orthogonal pilots. In the decoder, an age-based Learned Iterative Shrinkage-Thresholding Algorithm (LISTA-AGE) that utilizes the AoI of MDs as prior information is proposed to enhance active user detection. Theoretical analysis demonstrates that the proposed A-PIAAE has better convergence performance. Experiments demonstrate the advantage of the proposed method in reducing the average AoI of MDs and improving the successful detection rate of ADs.
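For orientation, the plain LISTA layer that LISTA-AGE presumably builds on (generic notation, assumed here) refines the sparse activity estimate as
\[
\mathbf{x}^{(k+1)} = \eta_{\theta_k}\!\big(\mathbf{W}_k \mathbf{y} + \mathbf{S}_k \mathbf{x}^{(k)}\big),
\]
where $\mathbf{y}$ is the received signal, $\mathbf{W}_k,\mathbf{S}_k$ are learned matrices, and $\eta_{\theta_k}$ is a soft-thresholding operator with learned thresholds; an age-based prior can then be injected, for instance by letting each MD's threshold depend on its current AoI, since under age-based access a device with larger AoI is more likely to be active.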
This paper investigates the characteristics of energy detection (ED) over composite $\kappa$-$\mu$ shadowed fading channels in massive machine-type communication (mMTC) networks. We derive a closed-form expression for the probability density function (PDF) of the signal-to-noise ratio (SNR) based on the Inverse Gaussian (\emph{IG}) distribution. By adopting novel integration and mathematical transformation techniques, we derive a truncation-based closed-form expression for the average detection probability for the first time. Our simulations show that the number of propagation paths has a more pronounced effect on the average detection probability than the average SNR, in contrast to earlier studies focused on device-to-device networks. This suggests that 6G mMTC network design should consider enhancing transmitter-receiver placement and antenna alignment strategies, rather than relying solely on increasing the device-to-device average SNR.
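For reference, the quantity being put in closed form is the standard energy-detection average (generic notation, not the paper's):
\[
\bar{P}_d = \int_0^{\infty} Q_u\!\big(\sqrt{2\gamma}, \sqrt{\lambda}\big)\, f_{\gamma}(\gamma)\, \mathrm{d}\gamma,
\]
where $Q_u(\cdot,\cdot)$ is the generalized Marcum Q-function, $u$ the time-bandwidth product, $\lambda$ the detection threshold, and $f_{\gamma}$ the composite $\kappa$-$\mu$ shadowed SNR PDF (here modeled via the IG distribution); the truncation in the derived expression presumably controls an otherwise infinite series representation of this integral.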
Wireless underground sensor networks (WUSNs), which enable real-time sensing and monitoring of underground resources by underground devices (UDs), hold great promise for delivering substantial social and economic benefits across various verticals. However, due to the harsh subterranean environment, scarce network resources, and restricted communication coverage, WUSNs face significant challenges in supporting sustainable massive machine-type communications (mMTC), particularly in remote, disaster-stricken, and hard-to-reach areas. To address these challenges, we conceptualize in this study a novel space-air-ground-underground integrated network (SAGUIN) architecture that seamlessly incorporates satellite systems, aerial platforms, terrestrial networks, and underground communications. On this basis, we integrate LoRaWAN and wireless energy transfer (WET) technologies into SAGUIN to enable sustainable subterranean mMTC. We begin by reviewing the relevant technical background and presenting the architecture and implementation challenges of SAGUIN. Then, we employ simulations of a remote underground pipeline monitoring scenario to evaluate the feasibility and performance of SAGUIN based on LoRaWAN and WET technologies, focusing on the effects of parameters such as underground conditions, time allocation, LoRaWAN spreading factor (SF) configurations, reporting periods, and harvested energy levels. Our results show that the proposed SAGUIN system, when combined with the derived time allocation strategy and an appropriate SF, can effectively extend the operational lifetime of UDs, thereby facilitating sustainable subterranean mMTC. Finally, we pinpoint key challenges and future research directions for SAGUIN.
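As a rough sketch of the time-allocation trade-off involved (hypothetical notation): if a frame of length $T$ is split into a WET phase $\tau T$ and an uplink phase $(1-\tau)T$, a UD with end-to-end (including through-soil) channel gain $g$ harvests
\[
E_{\mathrm{h}} = \eta\, P_0\, g\, \tau T,
\]
and can report only if $E_{\mathrm{h}} \ge E_{\mathrm{c}} + P_{\mathrm{tx}}\, T_{\mathrm{pkt}}(\mathrm{SF})$, where $\eta$ is the harvesting efficiency, $P_0$ the WET transmit power, $E_{\mathrm{c}}$ the circuit energy, and $T_{\mathrm{pkt}}(\mathrm{SF})$ the LoRaWAN packet airtime, which roughly doubles with each SF increment; the feasible $(\tau, \mathrm{SF})$ pairs thus determine how short a reporting period the UDs can sustain.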
This paper proposes a grant-free coded random access (CRA) scheme for uplink massive machine-type communications (mMTC), based on Zak-orthogonal time frequency space (Zak-OTFS) modulation in the delay-Doppler domain. The scheme is tailored for doubly selective wireless channels, where conventional orthogonal frequency-division multiplexing (OFDM)-based CRA suffers from unreliable inter-slot channel prediction due to time-frequency variability. By exploiting the predictable nature of Zak-OTFS, the proposed approach enables accurate channel estimation across slots, facilitating reliable successive interference cancellation across user packet replicas. A fair comparison with an OFDM-based CRA baseline shows that the proposed scheme achieves significantly lower packet loss rates under high mobility and user density. Extensive simulations over the standardized Veh-A channel confirm the robustness and scalability of Zak-OTFS-based CRA, supporting its applicability to future mMTC deployments.
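For background, the delay-Doppler representation underlying Zak-OTFS is obtained via the time-domain Zak transform with delay period $\tau_p$ (standard definition, not specific to this paper):
\[
\mathcal{Z}_x(\tau,\nu) = \sqrt{\tau_p}\, \sum_{k\in\mathbb{Z}} x(\tau + k\tau_p)\, e^{-j2\pi\nu k\tau_p}.
\]
Symbols are carried directly on the resulting delay-Doppler grid; when the channel's delay and Doppler spreads stay within one fundamental period, its delay-Doppler response varies slowly across slots, which is what makes the inter-slot channel prediction exploited by the proposed CRA scheme reliable.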
This paper presents a novel approach to resource allocation in Open Radio Access Networks (O-RAN), leveraging a generative AI technique combined with network slicing to address the diverse demands of 5G and 6G service types such as Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and Massive Machine-Type Communications (mMTC). Additionally, we provide a comprehensive analysis and comparison of machine learning (ML) techniques for resource allocation within O-RAN, evaluating their effectiveness in optimizing network performance. We introduce a diffusion-based reinforcement learning (Diffusion-RL) algorithm designed to optimize the allocation of physical resource blocks (PRBs) and power consumption, thereby maximizing weighted throughput and minimizing the delay for user equipment (UE). The Diffusion-RL model incorporates controlled noise and perturbations to explore optimal resource distributions while meeting each service type's Quality of Service (QoS) requirements. We evaluate the performance of our proposed method against several benchmarks, including an exhaustive search algorithm, deep Q-networks (DQN), and a semi-supervised variational autoencoder (SS-VAE). Comprehensive metrics, such as throughput and latency, are presented for each service type. Experimental results demonstrate that the Diffusion-RL approach outperforms existing methods in efficiency, scalability, and robustness, offering a promising solution for resource allocation in dynamic and heterogeneous O-RAN environments, with significant implications for future 6G networks.
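One common way such a diffusion policy generates allocations (generic DDPM notation conditioned on the network state $\mathbf{c}$; not necessarily the paper's exact sampler) is the reverse denoising step
\[
\mathbf{a}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{a}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\epsilon}_\theta(\mathbf{a}_t, t, \mathbf{c})\right) + \sigma_t \mathbf{z}, \qquad \mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),
\]
so a PRB/power allocation $\mathbf{a}_0$ is produced by iteratively denoising Gaussian noise conditioned on the observed O-RAN state, with the denoiser $\boldsymbol{\epsilon}_\theta$ trained against the RL objective; the injected noise at each step is what provides the controlled exploration mentioned above.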
Massive machine-type communications (mMTC) are fundamental to the Internet of Things (IoT) framework in future wireless networks, involving the connection of a vast number of devices with sporadic transmission patterns. Traditional device activity detection (AD) methods are typically developed under a Gaussian noise assumption, but their performance may deteriorate when this assumption does not hold, particularly in the presence of heavy-tailed impulsive noise. In this paper, we propose robust statistical techniques for AD that do not rely on the Gaussian assumption, replacing the Gaussian loss function with robust loss functions that effectively mitigate the impact of heavy-tailed noise and outliers. First, we prove that the coordinate-wise (conditional) objective function is geodesically convex and derive a fixed-point (FP) algorithm for minimizing it, along with convergence guarantees. Building on the FP algorithm, we propose two robust algorithms for solving the full (unconditional) objective function: a coordinate-wise optimization algorithm (RCWO) and a greedy covariance-learning-based matching pursuit algorithm (RCL-MP). Numerical experiments demonstrate that the proposed methods significantly outperform existing algorithms in scenarios with non-Gaussian noise, achieving higher detection accuracy and robustness.
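As a sketch of the kind of robustification described (our notation, not necessarily the paper's): the classical Gaussian covariance-learning objective for AD,
\[
\min_{\boldsymbol{\gamma}\ge \mathbf{0}}\; \log\det \boldsymbol{\Sigma}(\boldsymbol{\gamma}) + \frac{1}{M}\sum_{m=1}^{M} \mathbf{y}_m^{\mathsf{H}} \boldsymbol{\Sigma}(\boldsymbol{\gamma})^{-1} \mathbf{y}_m,
\qquad \boldsymbol{\Sigma}(\boldsymbol{\gamma}) = \mathbf{S}\,\mathrm{diag}(\boldsymbol{\gamma})\,\mathbf{S}^{\mathsf{H}} + \sigma^2\mathbf{I},
\]
can be robustified by replacing each quadratic term with $\rho\big(\mathbf{y}_m^{\mathsf{H}} \boldsymbol{\Sigma}(\boldsymbol{\gamma})^{-1} \mathbf{y}_m\big)$ for a bounded-influence loss $\rho$ (e.g., a Huber-type or heavy-tailed $t$-type loss), which downweights outlying snapshots caused by impulsive noise while reducing to the Gaussian objective when $\rho$ is the identity.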
The use of cellular networks for massive machine-type communications (mMTC) is an appealing solution due to the availability of existing infrastructure. However, the massive number of user equipments (UEs) poses a significant challenge to the cellular network's random access channel (RACH) in terms of congestion and overload. To mitigate this problem, we first present a novel approach to modeling a two-priority RACH, which allows us to define access patterns that describe the random access behavior of UEs as observed by the base station (BS). A non-uniform preamble selection scheme is proposed, offering increased flexibility in resource allocation for different UE priority classes. Then, we formulate an allocation model that finds the optimal access probabilities to maximize the success rate of high-priority UEs while constraining low-priority UEs. Finally, we develop a reinforcement learning approach that solves the optimization problem using multi-armed bandits, providing a near-optimal yet scalable solution that does not require the BS to know the number of UEs in the network.
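A minimal sketch of the bandit view (hypothetical discretization, not the paper's exact formulation): treating each candidate access-probability configuration $a$ as an arm with empirical high-priority success rate $\hat{\mu}_a$, a UCB-style controller at the BS selects
\[
a_t = \arg\max_{a}\; \hat{\mu}_a + \sqrt{\frac{2\ln t}{N_a(t)}},
\]
where $N_a(t)$ counts how often arm $a$ has been played; the constraint on low-priority UEs can be respected by discarding or penalizing arms whose observed low-priority success rate falls below the target, and the index relies only on observed outcomes rather than knowledge of the UE population size.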