Wireless communication systems must increasingly support a multitude of machine-type communication (MTC) devices, calling for advanced strategies for active user detection (AUD). Recent literature has explored AUD techniques based on compressed sensing, highlighting the critical role of signal sparsity. This study investigates the relationship between frequency diversity and signal sparsity in the AUD problem. Single-antenna users transmit multiple copies of non-orthogonal pilots across multiple frequency channels, and the base station independently performs AUD in each channel using the orthogonal matching pursuit (OMP) algorithm. We note that, although frequency diversity may improve the likelihood of successful reception of the signals, it may also degrade the sparsity of the received signal, leading to an important trade-off. We show that a sparser signal significantly benefits AUD, surpassing the advantages brought by frequency diversity in scenarios with limited temporal resources and/or large numbers of receive antennas. Conversely, with longer pilots and fewer receive antennas, investing in frequency diversity becomes more impactful, yielding a tenfold AUD performance improvement.
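The OMP algorithm mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy noiseless setup with real-valued Gaussian pilots, hypothetical dimensions (pilot length 20, 40 users, 8 receive antennas), and detects the active-user support by greedy correlation and least-squares refinement.

```python
import numpy as np

def omp_aud(Y, A, k_max, tol=1e-6):
    """Detect active users via Orthogonal Matching Pursuit (toy sketch).

    Y : (L, M) received pilot signal (L: pilot length, M: antennas)
    A : (L, N) non-orthogonal pilot matrix, one column per user
    Returns the estimated set of active-user indices.
    """
    residual = Y.copy()
    support = []
    for _ in range(k_max):
        # correlate the residual with every pilot and pick the best match
        corr = np.linalg.norm(A.conj().T @ residual, axis=1)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        # least-squares re-estimate on the current support
        As = A[:, support]
        X, *_ = np.linalg.lstsq(As, Y, rcond=None)
        residual = Y - As @ X
        if np.linalg.norm(residual) < tol:
            break
    return sorted(support)

# toy example: 3 active users out of 40, pilots of length 20, 8 antennas
rng = np.random.default_rng(0)
L, N, M = 20, 40, 8
A = rng.standard_normal((L, N)) / np.sqrt(L)  # non-orthogonal pilots
active = [3, 17, 29]
H = rng.standard_normal((len(active), M))     # per-antenna channel gains
Y = A[:, active] @ H                          # noiseless received signal
print(omp_aud(Y, A, k_max=3))
```

In this noiseless toy case OMP recovers the true support; in the paper's setting the same greedy loop runs independently per frequency channel.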
Contemporary wireless communication systems rely on Multi-User Multiple-Input Multiple-Output (MU-MIMO) techniques. In such systems, each Access Point (AP) is equipped with multiple antenna elements and serves multiple devices simultaneously. Notably, traditional systems utilize fixed antennas, i.e., antennas without any movement capabilities, while the idea of movable antennas has recently gained traction among the research community. By moving within a confined region, movable antennas are able to exploit wireless channel variations in the continuous domain. This additional degree of freedom may enhance the quality of the wireless links and, consequently, the communication performance. However, the movable antennas for MU-MIMO proposed in the literature are complex, bulky, expensive, and consume considerable power. In this paper, we propose an alternative to such systems with lower complexity and lower cost. More specifically, we propose adding rotation capabilities to APs equipped with Uniform Linear Arrays (ULAs) of antennas. We consider the uplink of an indoor scenario where the AP serves multiple devices simultaneously. The optimal rotation of the ULA is computed based on estimates of the positions of the active devices, aiming to maximize the per-user mean achievable Spectral Efficiency (SE). Adopting a spatially correlated Rician channel model, our numerical results show that the rotation capabilities of the AP can bring substantial improvements in the SE in scenarios where the line-of-sight component of the channel vectors is strong. Moreover, our proposed system is robust against imperfect positioning estimates.
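The rotation optimization described above can be sketched with a simple grid search. This is only an illustration under strong assumptions not taken from the paper: pure line-of-sight channels (no Rician scattering or spatial correlation), a half-wavelength 8-element ULA, maximum-ratio combining, and hypothetical user angles.

```python
import numpy as np

def steering(n_ant, angle):
    # half-wavelength ULA steering vector for a given angle of arrival
    n = np.arange(n_ant)
    return np.exp(1j * np.pi * n * np.sin(angle))

def mean_se(rotation, user_angles, n_ant=8, snr=10.0):
    """Per-user mean achievable SE with MRC over pure-LoS channels (sketch)."""
    H = np.stack([steering(n_ant, a - rotation) for a in user_angles])
    se = []
    for k, hk in enumerate(H):
        sig = snr * np.abs(hk.conj() @ hk) ** 2          # desired power
        interf = sum(snr * np.abs(hk.conj() @ hj) ** 2   # inter-user terms
                     for j, hj in enumerate(H) if j != k)
        sinr = sig / (interf + np.linalg.norm(hk) ** 2)  # unit noise power
        se.append(np.log2(1 + sinr))
    return float(np.mean(se))

# grid search for the best rotation given (estimated) user positions
user_angles = np.deg2rad([-40.0, -5.0, 10.0, 55.0])  # hypothetical users
grid = np.linspace(-np.pi / 2, np.pi / 2, 181)
best = max(grid, key=lambda r: mean_se(r, user_angles))
print(f"best rotation: {np.rad2deg(best):.1f} deg, "
      f"mean SE: {mean_se(best, user_angles):.2f} bit/s/Hz")
```

Rotating the array changes the effective angular separation of the users, which is exactly the degree of freedom the proposed system exploits.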
Distributed learning on edge devices has attracted increased attention with the advent of federated learning (FL). Notably, edge devices often have limited battery and heterogeneous energy availability, while multiple rounds are required in FL for convergence, intensifying the need for energy efficiency. Energy depletion may hinder the training process and the efficient utilization of the trained model. To address these problems, this letter considers the integration of energy harvesting (EH) devices into an FL network with multi-channel ALOHA, while proposing a method to ensure both a low energy outage probability and the successful execution of future tasks. Numerical results demonstrate the effectiveness of this method, particularly in critical setups where the average energy income fails to cover the iteration cost. The method outperforms a norm-based solution in terms of convergence time and battery level.
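The energy-outage trade-off can be illustrated with a toy Monte Carlo sketch. This is not the letter's method: it assumes a hypothetical single device with exponential energy income, a fixed per-round cost, and a simple battery-threshold participation rule standing in for the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(rounds, mean_income, round_cost, threshold, battery_max=5.0):
    """Battery evolution of one EH device across FL rounds (toy sketch).

    The device joins a round only if its battery exceeds `threshold`;
    returns (outage probability among joined rounds, participation rate).
    """
    battery, outages, joins = battery_max / 2, 0, 0
    for _ in range(rounds):
        # harvest energy, capped by the battery capacity
        battery = min(battery + rng.exponential(mean_income), battery_max)
        if battery < threshold:
            continue                   # sit this round out to recharge
        joins += 1
        if battery >= round_cost:
            battery -= round_cost      # successful local iteration
        else:
            outages += 1               # depleted mid-round
    return outages / max(joins, 1), joins / rounds

# critical setup: average income (0.8) below the iteration cost (1.0)
for th in (0.0, 1.0):
    p_out, p_join = simulate(10_000, 0.8, 1.0, th)
    print(f"threshold={th}: outage={p_out:.3f}, participation={p_join:.3f}")
```

A threshold at or above the round cost eliminates outages at the price of a lower participation rate, which is the tension the proposed method balances.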
Industry and academia have been working towards the evolution from Centralized massive Multiple-Input Multiple-Output (CmMIMO) to Distributed mMIMO (DmMIMO) architectures. Instead of splitting a coverage area into many cells, each served by a single Base Station equipped with several antennas, in DmMIMO the whole coverage area is jointly covered by several Access Points (APs) equipped with few or single antennas. Nevertheless, when choosing between deploying more APs with few or single antennas or fewer APs equipped with many antennas, one observes an inherent trade-off between the beamforming and macro-diversity gains that has not been investigated in the literature. Given a total number of antenna elements and a total downlink power, under a channel model that accounts for a probability of Line-of-Sight (LoS) as a function of the distance between the User Equipments (UEs) and APs, our numerical results show that there exists a "sweet spot" in the number of APs and of antenna elements per AP, which is a function of the physical dimensions of the coverage area.
We propose and evaluate the performance of a Non-Orthogonal Multiple Access (NOMA) dual-hop multiple relay (MR) network from an information freshness perspective using the Age of Information (AoI) metric. More specifically, we consider an age-dependent (AD) policy, named AD-NOMA-MR, in which users only transmit, with a given probability, after they reach a certain age threshold. The packets sent by the users are potentially received by the relays and then forwarded to a common sink in a NOMA fashion by randomly selecting one of the available power levels; multiple packets are received if all selected levels are unique. We derive analytical expressions for the average AoI of AD-NOMA-MR. Through numerical and simulation results, we show that the proposed policy can improve the average AoI by up to 76.6% when compared to a previously proposed AD Orthogonal Multiple Access MR policy.
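The age-dependent random access above can be sketched in a short simulation. This is not the paper's analytical model: it abstracts the relays away and assumes a hypothetical single-hop setup where a packet is decoded only if its randomly chosen power level is unique in the slot.

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_aoi(slots, n_users, n_levels, age_threshold, tx_prob):
    """Network-average AoI under age-dependent NOMA random access (sketch).

    Each slot, users whose age reached `age_threshold` transmit with
    probability `tx_prob` and pick one of `n_levels` power levels at
    random; a packet is decoded only if its level is unique in the slot.
    """
    ages = np.zeros(n_users, dtype=np.int64)
    total = 0
    for _ in range(slots):
        ages += 1
        tx = (ages >= age_threshold) & (rng.random(n_users) < tx_prob)
        levels = rng.integers(0, n_levels, size=n_users)
        levels[~tx] = -1                       # silent users hold no level
        vals, counts = np.unique(levels[tx], return_counts=True)
        ok = tx & np.isin(levels, vals[counts == 1])
        ages[ok] = 0                           # fresh update at the sink
        total += ages.sum()
    return total / (slots * n_users)

# age thresholding trades per-user waiting time against contention
for th in (1, 20):
    print(f"threshold={th}: avg AoI ≈ {avg_aoi(50_000, 10, 4, th, 0.5):.1f}")
```

With many contending users, a higher threshold thins out simultaneous transmissions and makes unique power-level selection more likely, which is the mechanism behind the AD policy's AoI gains.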
Massive Multiple-Input Multiple-Output (mMIMO) is one of the essential technologies introduced by the Fifth Generation (5G) of wireless communication systems. However, although mMIMO provides many benefits for wireless communications, it cannot ensure uniform wireless coverage and suffers from the inter-cell interference inherent to the traditional cellular network paradigm. Therefore, industry and academia are working on the evolution from conventional Centralized mMIMO (CmMIMO) to Distributed mMIMO (DmMIMO) architectures for the Sixth Generation (6G) of wireless networks. Under this new paradigm, several Access Points (APs) are distributed over the coverage area and jointly cooperate to serve the active devices. Aiming at Machine-Type Communication (MTC) use cases, we compare the performance of CmMIMO and different DmMIMO deployments in an indoor industrial scenario, considering regular and alarm traffic patterns for MTC. Our simulation results show that DmMIMO's performance is often superior to that of CmMIMO. However, traditional CmMIMO can outperform DmMIMO when the devices' channels are highly correlated.
Prolonging the lifetime of massive machine-type communication (MTC) networks is key to realizing a sustainable digitized society. Great energy savings can be achieved by accurately predicting MTC traffic followed by properly designed resource allocation mechanisms. However, selecting the proper MTC traffic predictor is not straightforward and depends on accuracy/complexity trade-offs and the specific MTC applications and network characteristics. Remarkably, the related state-of-the-art literature still lacks such debates. Herein, we assess the performance of several machine learning (ML) methods to predict Poisson and quasi-periodic MTC traffic in terms of accuracy and computational cost. Results show that the temporal convolutional network (TCN) outperforms the long short-term memory (LSTM), the gated recurrent units (GRU), and the recurrent neural network (RNN), in that order. For Poisson traffic, the accuracy gap between the predictors is larger than under quasi-periodic traffic. Finally, we show that running the TCN predictor is around three times more costly than the other methods, and that its training time is the longest while its inference time is the shortest.
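Why Poisson traffic is harder to predict than quasi-periodic traffic can be shown with a minimal sketch. This is not the paper's setup: it uses hypothetical synthetic traces and a seasonal-naive baseline (repeat the value one period ago) instead of the ML predictors compared there.

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_traffic(n, rate=5.0):
    # memoryless arrivals: packet counts per reporting interval
    return rng.poisson(rate, size=n).astype(float)

def quasi_periodic_traffic(n, period=24, rate=5.0, jitter=0.5):
    # periodic reporting pattern with small random jitter
    t = np.arange(n)
    base = rate * (1 + np.sin(2 * np.pi * t / period))
    return np.clip(base + rng.normal(0, jitter, size=n), 0, None)

def seasonal_naive_mae(x, period=24):
    """MAE of a seasonal-naive predictor, a simple stand-in
    for the ML predictors compared in the paper."""
    return float(np.mean(np.abs(x[period:] - x[:-period])))

po = poisson_traffic(2_000)
qp = quasi_periodic_traffic(2_000)
print(f"Poisson MAE:        {seasonal_naive_mae(po):.2f}")
print(f"quasi-periodic MAE: {seasonal_naive_mae(qp):.2f}")
```

Even this naive baseline tracks the quasi-periodic trace closely while the memoryless Poisson trace leaves a large irreducible error, consistent with the larger accuracy gap the paper reports under Poisson traffic.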
Reducing energy consumption is a pressing issue in low-power machine-type communication (MTC) networks. In this regard, the Wake-up Signal (WuS) technology, which aims to minimize the energy consumed by the radio interface of the machine-type devices (MTDs), stands as a promising solution. However, state-of-the-art WuS mechanisms use static operational parameters, so they cannot efficiently adapt to the system dynamics. To overcome this, we design a simple but efficient neural network to predict MTC traffic patterns and configure WuS accordingly. Our proposed forecasting WuS (FWuS) leverages an accurate long short-term memory (LSTM)-based traffic prediction that allows extending the sleep time of MTDs by avoiding frequent page monitoring occasions in idle state. Simulation results show the effectiveness of our approach. The traffic prediction errors are shown to be below 4%, with false alarm and miss-detection probabilities below 8.8% and 1.3%, respectively. In terms of energy consumption reduction, FWuS can outperform the best benchmark mechanism by up to 32%. Finally, we certify the ability of FWuS to dynamically adapt to traffic density changes, promoting low-power MTC scalability.
The Fifth Generation (5G) of wireless networks introduced native support for Machine-Type Communication (MTC), which is a key enabler for the Internet of Things (IoT) revolution. Current 5G standards are not yet capable of fully satisfying the requirements of critical MTC (cMTC) and massive MTC (mMTC) use cases. This is the main reason why industry and academia have already started working on technical solutions for beyond-5G and Sixth Generation (6G) networks. One technological solution that has been extensively studied is the combination of network densification, massive Multiple-Input Multiple-Output (mMIMO) systems, and user-centric design, which is known as distributed mMIMO or Cell-Free (CF) mMIMO. Under this new paradigm, there are no longer cell boundaries: all the Access Points (APs) in the network cooperate to jointly serve all the devices. In this paper, we compare the performance of traditional mMIMO and different distributed mMIMO setups, and quantify the macro-diversity and signal spatial diversity performance they provide. Aiming at the uplink in industrial indoor scenarios, we adopt a path loss model based on real measurement campaigns. Monte Carlo simulation results show that the grid deployment of APs provides higher average channel gains, but radio stripe deployments provide lower variability of the received signal strength.
The recent extra-large scale massive multiple-input multiple-output (XL-MIMO) systems are seen as a promising technology for providing very high data rates in increased user-density scenarios. Spatial non-stationarities and visibility regions (VRs) appear across the XL-MIMO array since its large dimension is of the same order as the distances to the user equipments (UEs). Due to the increased density of UEs in typical applications of XL-MIMO systems and the scarcity of pilots, the design of random access (RA) protocols and scheduling algorithms becomes challenging. In this paper, we propose a joint RA and scheduling protocol, namely the non-overlapping VR XL-MIMO (NOVR-XL) RA protocol, which takes advantage of the different VRs of the UEs to improve RA performance, while also scheduling UEs with non-overlapping VRs in the same payload data pilot resource. Our results reveal that the proposed scheme achieves significant gains in terms of sum rate compared with traditional RA schemes, as well as reducing access latency and improving connectivity performance as a whole.