With the arrival of 6G, Internet of Things (IoT) traffic is becoming increasingly complex and diverse. To meet the diverse service requirements of IoT devices, massive machine-type communications (mMTC) has become a typical scenario, and grant-free random access (GF-RA) has recently emerged as a promising direction due to its low signaling overhead. However, existing GF-RA research primarily focuses on improving the accuracy of user detection and data recovery, without considering the heterogeneity of traffic. In this paper, we investigate a non-orthogonal GF-RA scenario where two distinct types of traffic coexist: event-triggered traffic from alarm devices (ADs) and status-update traffic from monitor devices (MDs). The goal is to simultaneously achieve a high detection success rate for ADs and high information timeliness for MDs. First, we analyze the age-based random access scheme and optimize the access parameters to minimize the average age of information (AoI) of the MDs. Then, we design an age-based prior-information-aided autoencoder (A-PIAAE) that jointly detects active devices and learns the pilots used in GF-RA, reducing interference between non-orthogonal pilots. In the decoder, an Age-based Learned Iterative Shrinkage Thresholding Algorithm (LISTA-AGE), which uses the AoI of the MDs as prior information, is proposed to enhance active user detection. Theoretical analysis demonstrates that the proposed A-PIAAE has better convergence performance. Experiments demonstrate the advantage of the proposed method in reducing the average AoI of MDs and improving the successful detection rate of ADs.
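The abstract does not spell out the LISTA-AGE update, so the following is a minimal ISTA/LISTA-style sketch in which each device's shrinkage threshold is biased by its AoI. The 1/(1 + bias·age) weighting, the fixed (rather than learned) matrices, and all parameter names are illustrative assumptions, not the paper's exact decoder:

```python
import numpy as np

def soft_threshold(x, theta):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_age_decode(y, A, age, num_layers=8, step=None, bias_scale=0.1):
    """Sketch of a LISTA-style decoder whose shrinkage is biased by device age.

    y    : received signal (M,), real-valued for simplicity
    A    : pilot matrix (M, N), one column per device
    age  : AoI of each device (N,); older devices get a weaker threshold,
           making them easier to declare active (assumed prior model).
    In a trained LISTA the matrices and thresholds would be learned per layer;
    here they are fixed, so this reduces to an age-weighted ISTA.
    """
    M, N = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant
    theta0 = 0.1 * step                            # base threshold (would be learned)
    theta = theta0 / (1.0 + bias_scale * age)      # larger AoI -> smaller threshold
    x = np.zeros(N)
    for _ in range(num_layers):
        r = y - A @ x                              # gradient step on data fidelity
        x = x + step * (A.T @ r)
        x = soft_threshold(x, theta)               # age-weighted shrinkage step
    return x                                       # nonzeros = detected active devices
```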
This paper investigates the characteristics of energy detection (ED) over composite $\kappa$-$\mu$ shadowed fading channels in massive machine-type communication (mMTC) networks. We derive a closed-form expression for the probability density function (PDF) of the signal-to-noise ratio (SNR) based on the Inverse Gaussian (\emph{IG}) distribution. By adopting novel integration and mathematical transformation techniques, we derive, for the first time, a truncation-based closed-form expression for the average detection probability. Our simulations show that the number of propagation paths has a more pronounced effect on the average detection probability than the average SNR, in contrast to earlier studies that focus on device-to-device networks. This suggests that 6G mMTC network design should emphasize transmitter-receiver placement and antenna alignment strategies rather than relying solely on increasing the device-to-device average SNR.
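As background for the detection-probability analysis, the sketch below Monte-Carlo-estimates an energy detector's detection probability, with the threshold set from a target false-alarm rate via the chi-square null distribution. A simple Rayleigh flat fade stands in for the $\kappa$-$\mu$ shadowed model, which is not implemented here; the sample count, trial count, and unit-power pilot are assumptions:

```python
import numpy as np
from scipy.stats import chi2

def energy_detector_pd(snr_db, num_samples=64, pfa=0.01, trials=20000, seed=None):
    """Monte-Carlo sketch of an energy detector's detection probability.

    Test statistic: T = sum_n |y[n]|^2, compared against a threshold fixed by
    the target false-alarm rate. Under H0 (unit-variance complex Gaussian
    noise), 2*T follows a chi-square law with 2*num_samples degrees of freedom.
    """
    rng = np.random.default_rng(seed)
    threshold = 0.5 * chi2.ppf(1.0 - pfa, df=2 * num_samples)
    snr = 10.0 ** (snr_db / 10.0)
    detections = 0
    for _ in range(trials):
        h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)   # flat fade
        s = np.sqrt(snr) * np.ones(num_samples)                                 # unit-power pilot
        n = (rng.standard_normal(num_samples)
             + 1j * rng.standard_normal(num_samples)) / np.sqrt(2)
        y = h * s + n
        detections += np.sum(np.abs(y) ** 2) > threshold
    return detections / trials
```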




Wireless underground sensor networks (WUSNs), which enable real-time sensing and monitoring of underground resources by underground devices (UDs), hold great promise for delivering substantial social and economic benefits across various verticals. However, due to the harsh subterranean environment, scarce network resources, and restricted communication coverage, WUSNs face significant challenges in supporting sustainable massive machine-type communications (mMTC), particularly in remote, disaster-stricken, and hard-to-reach areas. To address these challenges, we conceptualize in this study a novel space-air-ground-underground integrated network (SAGUIN) architecture that seamlessly incorporates satellite systems, aerial platforms, terrestrial networks, and underground communications. On this basis, we integrate LoRaWAN and wireless energy transfer (WET) technologies into SAGUIN to enable sustainable subterranean mMTC. We begin by reviewing the relevant technical background and presenting the architecture and implementation challenges of SAGUIN. Then, we employ simulations of a remote underground pipeline monitoring scenario to evaluate the feasibility and performance of SAGUIN based on LoRaWAN and WET technologies, focusing on the effects of parameters such as underground conditions, time allocation, LoRaWAN spreading factor (SF) configurations, reporting periods, and harvested energy levels. Our results show that the proposed SAGUIN system, when combined with the derived time allocation strategy and an appropriate SF, can effectively extend the operational lifetime of UDs, thereby facilitating sustainable subterranean mMTC. Finally, we pinpoint key challenges and future research directions for SAGUIN.
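To make the SF/time-allocation trade-off concrete, the following sketch combines the standard LoRa time-on-air formula with a simple harvest-then-transmit energy budget. The WET power, harvesting efficiency, transmit/sleep power, and payload size are placeholder values, not the paper's simulation settings:

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz=125_000, cr=1, preamble=8,
                     explicit_header=True, crc=True):
    """Standard LoRa time-on-air formula (Semtech SX127x-style)."""
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0     # low-data-rate optimization
    ih = 0 if explicit_header else 1
    t_sym = (2 ** sf) / bw_hz
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

def is_sustainable(sf, report_period_s, harvest_fraction, wet_power_w=10e-3,
                   harvest_eff=0.2, tx_power_w=0.1, sleep_power_w=5e-6,
                   payload_bytes=20):
    """Check whether the energy a UD harvests per reporting period covers one
    uplink plus sleep consumption (toy harvest-then-transmit budget)."""
    t_air = lora_time_on_air(payload_bytes, sf)
    e_needed = tx_power_w * t_air + sleep_power_w * report_period_s
    e_harvested = harvest_eff * wet_power_w * harvest_fraction * report_period_s
    return e_harvested >= e_needed
```

For example, `is_sustainable(sf=12, report_period_s=3600, harvest_fraction=0.3)` indicates whether hourly reporting at SF12 can be sustained under the assumed harvesting parameters.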
This paper proposes a grant-free coded random access (CRA) scheme for uplink massive machine-type communications (mMTC), based on Zak-orthogonal time frequency space (Zak-OTFS) modulation in the delay-Doppler domain. The scheme is tailored for doubly selective wireless channels, where conventional orthogonal frequency-division multiplexing (OFDM)-based CRA suffers from unreliable inter-slot channel prediction due to time-frequency variability. By exploiting the predictable nature of Zak-OTFS, the proposed approach enables accurate channel estimation across slots, facilitating reliable successive interference cancellation across user packet replicas. A fair comparison with an OFDM-based CRA baseline shows that the proposed scheme achieves significantly lower packet loss rates under high mobility and user density. Extensive simulations over the standardized Veh-A channel confirm the robustness and scalability of Zak-OTFS-based CRA, supporting its applicability to future mMTC deployments.
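As a structural illustration of coded random access with successive interference cancellation, the toy collision-model simulation below places packet replicas in random slots and iteratively resolves singleton slots; it deliberately abstracts away the Zak-OTFS physical layer and the inter-slot channel prediction the paper relies on:

```python
import random

def simulate_cra_frame(num_users, num_slots, replicas=2, rng=random):
    """Toy collision-model simulation of coded random access with SIC.

    Each user places `replicas` copies of its packet in distinct slots; the
    receiver repeatedly decodes slots containing exactly one packet and
    cancels that user's other replicas. Imperfect cancellation and channel
    estimation errors are ignored.
    """
    placements = {u: rng.sample(range(num_slots), replicas) for u in range(num_users)}
    slots = [set() for _ in range(num_slots)]
    for u, chosen in placements.items():
        for s in chosen:
            slots[s].add(u)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in range(num_slots):
            if len(slots[s]) == 1:                  # singleton slot: decodable
                u = next(iter(slots[s]))
                decoded.add(u)
                for s2 in placements[u]:            # cancel all replicas of u
                    slots[s2].discard(u)
                progress = True
    return len(decoded) / num_users                 # fraction of resolved users
```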
This paper presents a novel approach to resource allocation in Open Radio Access Networks (O-RAN), leveraging a Generative AI technique with network slicing to address the diverse demands of 5G and 6G service types such as Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and Massive Machine-Type Communications (mMTC). Additionally, we provide a comprehensive analysis and comparison of machine learning (ML) techniques for resource allocation within O-RAN, evaluating their effectiveness in optimizing network performance. We introduce a diffusion-based reinforcement learning (Diffusion-RL) algorithm designed to optimize the allocation of physical resource blocks (PRBs) and power consumption, thereby maximizing weighted throughput and minimizing the delay for user equipment (UE). The Diffusion-RL model incorporates controlled noise and perturbations to explore optimal resource distribution while meeting each service type's Quality of Service (QoS) requirements. We evaluate the performance of our proposed method against several benchmarks, including an exhaustive search algorithm, deep Q-networks (DQN), and the Semi-Supervised Variational Autoencoder (SS-VAE). Comprehensive metrics, such as throughput and latency, are presented for each service type. Experimental results demonstrate that the Diffusion-based RL approach outperforms existing methods in efficiency, scalability, and robustness, offering a promising solution for resource allocation in dynamic and heterogeneous O-RAN environments with significant implications for future 6G networks.
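The abstract states the allocation objective only at a high level; the sketch below scores a candidate PRB/power allocation with a weighted-throughput reward, a simple delay proxy, and hard per-slice QoS checks. The reward shape, delay proxy, and all parameter values are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def slice_allocation_score(prbs, power_w, snr_per_prb, weights, min_rate_bps,
                           prb_bw_hz=180e3, delay_penalty=1e-3):
    """Illustrative reward for a PRB/power allocation across network slices.

    prbs, power_w : PRBs and transmit power assigned to each slice
    snr_per_prb   : effective linear SNR per PRB for each slice (assumed given)
    weights       : slice priority weights (e.g. URLLC > eMBB > mMTC)
    min_rate_bps  : per-slice QoS rate requirements
    """
    prbs = np.asarray(prbs, dtype=float)
    rate = prbs * prb_bw_hz * np.log2(1.0 + snr_per_prb * power_w / np.maximum(prbs, 1))
    if np.any(rate < min_rate_bps):            # QoS violated -> infeasible allocation
        return -np.inf
    # Crude delay proxy: a larger rate margin over the requirement means lower delay.
    delay = 1.0 / np.maximum(rate - min_rate_bps, 1.0)
    return float(np.sum(weights * rate) - delay_penalty * np.sum(weights * delay))
```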
Massive machine-type communications (mMTC) are fundamental to the Internet of Things (IoT) framework in future wireless networks, involving the connection of a vast number of devices with sporadic transmission patterns. Traditional device activity detection (AD) methods are typically developed for Gaussian noise, but their performance may deteriorate when this assumption does not hold, particularly in the presence of heavy-tailed impulsive noise. In this paper, we propose robust statistical techniques for AD that do not rely on the Gaussian assumption, replacing the Gaussian loss function with robust loss functions that effectively mitigate the impact of heavy-tailed noise and outliers. First, we prove that the coordinate-wise (conditional) objective function is geodesically convex and derive a fixed-point (FP) algorithm for minimizing it, along with convergence guarantees. Building on the FP algorithm, we propose two robust algorithms for solving the full (unconditional) objective function: a robust coordinate-wise optimization algorithm (RCWO) and a robust greedy covariance-learning-based matching pursuit algorithm (RCL-MP). Numerical experiments demonstrate that the proposed methods significantly outperform existing algorithms in scenarios with non-Gaussian noise, achieving higher detection accuracy and robustness.
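Since the abstract does not name the specific robust losses, the sketch below uses the Huber loss as one standard heavy-tail-resistant choice and contrasts it with the Gaussian (squared) loss in a simple per-device matched-fit score; the score itself is illustrative and is not the paper's RCWO or RCL-MP objective:

```python
import numpy as np

def squared_loss(residual):
    """Gaussian (least-squares) loss: sensitive to heavy-tailed outliers."""
    return 0.5 * np.abs(residual) ** 2

def huber_loss(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, limiting the
    influence of impulsive noise samples (a standard robust choice; the
    paper's exact loss functions are not specified in the abstract)."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

def robust_activity_score(y, pilot, delta=1.0):
    """Score one device's pilot by how much fitting it reduces the robust
    loss of the residual relative to the all-zero hypothesis (illustrative)."""
    gain = (pilot.conj() @ y) / (pilot.conj() @ pilot)   # least-squares fit of this pilot
    return np.sum(huber_loss(y, delta)) - np.sum(huber_loss(y - gain * pilot, delta))
```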
The use of cellular networks for massive machine-type communications (mMTC) is an appealing solution due to the availability of existing infrastructure. However, the massive number of user equipments (UEs) poses a significant challenge to the cellular network's random access channel (RACH) in terms of congestion and overloading. To mitigate this problem, we first present a novel approach to modeling a two-priority RACH, which allows us to define access patterns that describe the random access behavior of UEs as observed by the base station (BS). A non-uniform preamble selection scheme is proposed, offering increased flexibility in resource allocation for different UE priority classes. We then formulate an allocation model that finds the optimal access probabilities to maximize the success rate of high-priority UEs while satisfying a constraint on the low-priority UEs. Finally, we develop a reinforcement learning approach that solves the optimization problem using multi-armed bandits, providing a near-optimal yet scalable solution that does not require the BS to know the number of UEs in the network.
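The abstract does not specify the bandit algorithm, so the following is a minimal UCB1 sketch in which the BS treats candidate preamble-selection distributions as arms and the observed high-priority success rate as the reward; the arm set and reward definition are illustrative assumptions:

```python
import numpy as np

class AccessProbabilityBandit:
    """UCB1 sketch: each RACH cycle the BS plays one candidate preamble-
    selection distribution (arm) and observes the resulting high-priority
    success rate as the reward."""

    def __init__(self, num_arms):
        self.counts = np.zeros(num_arms)
        self.values = np.zeros(num_arms)    # running mean reward per arm
        self.t = 0

    def select_arm(self):
        self.t += 1
        if np.any(self.counts == 0):        # play each arm once first
            return int(np.argmin(self.counts))
        ucb = self.values + np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```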
Provisioning secrecy for all users, given the heterogeneity in their channel conditions and locations and the unknown location of the attacker/eavesdropper, is challenging and not always feasible. The problem is even more difficult under the finite blocklength constraints that are common in ultra-reliable low-latency communication (URLLC) and massive machine-type communications (mMTC). This work takes the first step toward guaranteeing secrecy for all URLLC/mMTC users in the finite blocklength regime (FBR), where intelligent reflecting surfaces (IRS) are used to enhance legitimate users' reception and prevent the potential eavesdropper (Eve) from intercepting transmissions. To that end, we aim to maximize the minimum secrecy rate (SR) among all users by jointly optimizing the transmitter's beamforming and the IRS's passive reflective elements (PREs) under the FBR latency constraints. The resulting optimization problem is non-convex and becomes even more complicated under imperfect channel state information (CSI). To tackle it, we linearize the objective function and decompose the problem into sequential subproblems. When perfect CSI is not available, we use the successive convex approximation (SCA) approach to transform the imperfect-CSI-related semi-infinite constraints into finite linear matrix inequalities (LMIs).
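For reference, the helper below evaluates one commonly used finite-blocklength secrecy-rate approximation for a given pair of legitimate and eavesdropper SNRs; the decoding/secrecy error targets are placeholders, and the paper's exact FBR expression and optimization variables (beamforming, PREs) are not reproduced:

```python
import numpy as np
from scipy.stats import norm

def fbl_secrecy_rate(snr_bob, snr_eve, blocklength, eps_dec=1e-5, eps_sec=1e-3):
    """One common finite-blocklength secrecy-rate approximation (bits/channel use):

        R_s ~ [ C_B - C_E - sqrt(V_B/n) Q^{-1}(eps_dec) - sqrt(V_E/n) Q^{-1}(eps_sec) ]^+

    with C = log2(1 + snr) and channel dispersion V = (1 - (1+snr)^-2) * (log2 e)^2.
    The paper's exact expression may differ.
    """
    log2e = np.log2(np.e)
    c_b, c_e = np.log2(1 + snr_bob), np.log2(1 + snr_eve)
    v_b = (1 - (1 + snr_bob) ** -2) * log2e ** 2
    v_e = (1 - (1 + snr_eve) ** -2) * log2e ** 2
    q_inv = norm.isf                       # Q^{-1}: inverse Gaussian tail function
    rate = (c_b - c_e
            - np.sqrt(v_b / blocklength) * q_inv(eps_dec)
            - np.sqrt(v_e / blocklength) * q_inv(eps_sec))
    return max(float(rate), 0.0)
```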




Fast and accurate device activity detection is a critical challenge in grant-free access for supporting massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC) in 5G and beyond. State-of-the-art methods have unsatisfactory error rates or computation times. To address these outstanding issues, we propose new maximum likelihood estimation (MLE) and maximum a posteriori estimation (MAPE) based device activity detection methods for known and unknown pathloss that achieve superior error rate-computation time tradeoffs using optimization and deep learning techniques. Specifically, we investigate four non-convex optimization problems for MLE and MAPE in the two pathloss cases, with one MAPE problem formulated for the first time. For each non-convex problem, we develop an innovative parallel iterative algorithm using the parallel successive convex approximation (PSCA) method. Each PSCA-based algorithm allows parallel computations, uses up to second-order information of the objective function, converges to the problem's stationary points, and has a low per-iteration computational complexity compared to state-of-the-art algorithms. Then, for each PSCA-based iterative algorithm, we present a deep-unrolling neural network implementation, called PSCA-Net, to further reduce the computation time. Each PSCA-Net elegantly marries the underlying PSCA-based algorithm's parallel computation mechanism with a parallelizable neural network architecture and effectively optimizes its step sizes based on vast data samples to speed up convergence. Numerical results demonstrate that the proposed methods significantly reduce the error rate and computation time compared to state-of-the-art methods, revealing their significant value for grant-free access.
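For context, the sketch below implements the classical coordinate-wise ML covariance update that covariance-based activity detection methods build upon; the paper's PSCA-based algorithms instead perform parallel updates with learned step sizes (PSCA-Net) and are not reproduced here:

```python
import numpy as np

def ml_coordinate_ad(S_pilots, Y, sigma2=1.0, num_sweeps=10):
    """Classical coordinate-wise ML covariance update for activity detection.

    S_pilots : (L, N) complex pilot matrix, one column per device
    Y        : (L, M) received signal across M antennas
    Returns  : estimated activity coefficients gamma (N,), to be thresholded.
    """
    L, N = S_pilots.shape
    M = Y.shape[1]
    sample_cov = Y @ Y.conj().T / M                  # empirical covariance
    gamma = np.zeros(N)
    Sigma_inv = np.eye(L, dtype=complex) / sigma2    # inverse model covariance
    for _ in range(num_sweeps):
        for k in range(N):
            s = S_pilots[:, k:k + 1]
            a = np.real(s.conj().T @ Sigma_inv @ s).item()
            b = np.real(s.conj().T @ Sigma_inv @ sample_cov @ Sigma_inv @ s).item()
            d = max((b - a) / a ** 2, -gamma[k])     # coordinate ML step, gamma_k >= 0
            gamma[k] += d
            # Rank-one (Sherman-Morrison) update of the inverse covariance.
            v = Sigma_inv @ s
            Sigma_inv -= d * (v @ v.conj().T) / (1.0 + d * a)
    return gamma
```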
Grant-free random access (GF-RA) is a promising access technique for massive machine-type communications (mMTC) in future wireless networks, particularly in the context of 5G and beyond (6G) systems. Within the context of GF-RA, this study investigates the efficiency of supervised machine learning techniques for tackling the challenges of device activity detection (AD). GF-RA addresses scalability by employing non-orthogonal pilot sequences, providing an efficient alternative to the conventional grant-based random access (GB-RA) technique, which is constrained by the scarcity of orthogonal preamble resources. In this paper, we propose a novel lightweight data-driven algorithmic framework specifically designed for activity detection in GF-RA for mMTC in cell-free massive multiple-input multiple-output (CF-mMIMO) networks. We propose two distinct framework deployment strategies, centralized and decentralized, both tailored to streamline the implementation of the proposed approach across network infrastructures. Moreover, we introduce optimized post-detection methodologies complemented by a clustering stage to enhance overall detection performance. Our 3GPP-compliant simulations validate that the proposed algorithm achieves state-of-the-art model-based activity detection accuracy while significantly reducing complexity. Achieving 99% accuracy, it demonstrates real-world viability and effectiveness.
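The abstract does not describe the lightweight detector itself, so the sketch below shows one plausible data-driven baseline: per-device matched-filter energy features fed to a logistic-regression classifier. The feature choice, classifier, and training interface are assumptions, not the paper's framework:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def correlation_features(Y, pilots):
    """Per-device matched-filter energy, sum_m |s_k^H y_m|^2, used here as a
    simple hand-crafted feature (illustrative only)."""
    # Y: (L, M) received pilot-phase signal, pilots: (L, N)
    corr = pilots.conj().T @ Y                    # (N, M)
    return np.sum(np.abs(corr) ** 2, axis=1)      # (N,)

def train_per_device_detector(feature_list, activity_list):
    """Fit one binary classifier mapping a device's feature to active/inactive.

    feature_list : list of (N,) feature vectors, one per training frame
    activity_list: list of (N,) 0/1 activity labels for the same frames
    """
    X = np.concatenate(feature_list).reshape(-1, 1)
    y = np.concatenate(activity_list)
    return LogisticRegression().fit(X, y)
```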