The inherent limitations in scaling up ground infrastructure for future wireless networks, combined with decreasing operational costs of aerial and space networks, are driving considerable research interest in multisegment ground-air-space (GAS) networks. In GAS networks, where ground and aerial users share network resources, ubiquitous and accurate user localization becomes indispensable, not only as an end-user service but also as an enabler for location-aware communications. This breaks the convention of having localization as a byproduct in networks primarily designed for communications. To address these imperative localization needs, the design and utilization of ground, aerial, and space anchors require thorough investigation. In this tutorial, we provide an in-depth systematic analysis of the radio localization problem in GAS networks, considering ground and aerial users as targets to be localized. Starting from a survey of the most relevant works, we define the key characteristics of anchors and targets in GAS networks. Subsequently, we detail localization fundamentals in GAS networks, considering 3D positions and orientations. Afterward, we thoroughly analyze radio localization systems in GAS networks, detailing the system model, design aspects, and considerations for each of the three GAS anchors. Preliminary results are presented to provide a quantifiable perspective on key design aspects in GAS-based localization scenarios. We then identify the vital roles 6G enablers are expected to play in radio localization in GAS networks.
Radio environment maps (REMs) play a central role in optimizing wireless network deployment, enhancing network performance, and ensuring effective spectrum management. Conventional REM prediction methods are either excessively time-consuming, e.g., ray tracing, or inaccurate, e.g., statistical models, limiting their adoption in modern, inherently dynamic wireless networks. Deep-learning-based REM prediction has recently attracted considerable attention as an appealing, accurate, and time-efficient alternative. However, existing works on REM prediction using deep learning are either confined to 2D maps or rely on limited datasets. In this paper, we introduce a runtime-efficient REM prediction framework based on U-Nets, trained on a large-scale dataset of 3D maps. In addition, data preprocessing steps are investigated to further refine the REM prediction accuracy. The proposed U-Net framework, along with the preprocessing steps, is evaluated in the context of the 2023 IEEE ICASSP Signal Processing Grand Challenge, namely, the First Pathloss Radio Map Prediction Challenge. The evaluation results demonstrate that the proposed method achieves an average normalized root-mean-square error (RMSE) of 0.045 with an average runtime of 14 milliseconds (ms). Finally, we position the achieved REM prediction accuracy in the context of a relevant cell-free massive multiple-input multiple-output (CF-mMIMO) use case. We demonstrate that one can avoid expending energy on large-scale fading measurements and instead rely on the predicted REM to decide which sleeping access points (APs) to switch on in a CF-mMIMO network that adopts a minimum-propagation-loss AP switch ON/OFF strategy.
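As a minimal sketch of the reported evaluation metric, the normalized RMSE between a predicted and a ground-truth pathloss map can be computed as below. The exact normalization used by the challenge is not specified here; this sketch assumes normalization by the dynamic range of the ground-truth map, and the function name and map layout (lists of rows) are illustrative.

```python
import math

def normalized_rmse(pred, truth):
    """RMSE between a predicted and a ground-truth pathloss map,
    normalized by the dynamic range of the ground truth (an assumed
    convention; the challenge's exact normalization may differ)."""
    flat_p = [v for row in pred for v in row]
    flat_t = [v for row in truth for v in row]
    mse = sum((p - t) ** 2 for p, t in zip(flat_p, flat_t)) / len(flat_t)
    span = max(flat_t) - min(flat_t)
    return math.sqrt(mse) / span
```

For example, a 1x2 map predicted as [0.0, 1.0] against a ground truth of [0.0, 2.0] yields RMSE sqrt(0.5) over a dynamic range of 2.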
Accurate network synchronization is a key enabler for services such as coherent transmission, cooperative decoding, and localization in distributed and cell-free networks. Unlike centralized networks, where synchronization is generally needed only between a user and a base station, synchronization in distributed networks must be maintained among several cooperative devices, an inherently challenging task due to hardware imperfections and environmental influences on the clock, such as temperature. As a result, distributed networks must be frequently re-synchronized, introducing significant synchronization overhead. In this paper, we propose an online-LSTM-based model for clock skew and drift compensation that extends the interval at which synchronization signals are needed, decreasing the synchronization overhead. We conducted comprehensive experiments to assess the performance of the proposed model. Our measurement-based results show that the proposed model reduces the need for re-synchronization between devices by an order of magnitude, keeping devices synchronized with a precision of at least 10 microseconds with a probability of 90%.
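To make the compensation idea concrete, the sketch below shows the classical linear clock model that an online LSTM would refine: the measured offset between two clocks is fit as an initial offset plus a skew term, and the fitted model predicts the offset to subtract between synchronization rounds. This is a baseline illustration under an assumed constant-skew model, not the paper's LSTM; all function names are hypothetical.

```python
def fit_clock_model(times, offsets):
    """Ordinary least-squares fit of the linear clock model
    offset(t) = theta0 + skew * t, from offset measurements."""
    n = len(times)
    mt = sum(times) / n
    mo = sum(offsets) / n
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (o - mo) for t, o in zip(times, offsets))
    skew = sxy / sxx                 # slope: frequency offset (s/s)
    theta0 = mo - skew * mt          # intercept: initial time offset (s)
    return theta0, skew

def predicted_offset(t, theta0, skew):
    """Offset to subtract from the local clock at time t, so that
    re-synchronization signals are needed less frequently."""
    return theta0 + skew * t
```

With drift (a time-varying skew), this linear model degrades over long horizons, which is the gap a learned online predictor targets.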
Air traffic management (ATM) of manned and unmanned aerial vehicles (AVs) relies critically on ubiquitous location tracking. While technologies exist for AVs to broadcast their location periodically and for airports to track and detect AVs, methods to verify the broadcast locations and complement the ATM coverage are urgently needed to address anti-spoofing and safe-coexistence concerns. In this work, we propose an ATM solution that exploits noncoherent crowdsourced wireless networks (CWNs) and corrects the clock-synchronization problems inherent in such non-coordinated sensor networks. While CWNs can provide a great number of measurements for ubiquitous ATM, these are normally obtained from unsynchronized sensors. This article first analyzes the effects of the lack of clock synchronization in ATM with CWNs and provides solutions based on the presence of a few trustworthy sensors in a large non-coordinated network. Second, autoregressive-based and long short-term memory (LSTM)-based approaches are investigated to achieve the time synchronization needed for localization of the AVs. Finally, a combination of a multilateration (MLAT) method and a Kalman filter is employed to provide an anti-spoofing tracking solution for AVs. We demonstrate the performance advantages of our framework on a dataset collected by a real-world CWN. Our results show that the proposed framework achieves localization accuracy comparable to that obtained using only GPS-synchronized sensors and outperforms the localization accuracy of state-of-the-art CWN synchronization methods.
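As a minimal illustration of the MLAT step, the sketch below solves a 2D position fix from range measurements to known sensor positions by Gauss-Newton least squares. This is a generic textbook formulation, not the paper's method (which operates on time measurements from a real CWN and feeds a Kalman filter); the function name, 2D setup, and use of ranges rather than time differences are simplifying assumptions.

```python
import math

def multilaterate(anchors, ranges, x0=(10.0, 10.0), iters=50):
    """Gauss-Newton least-squares 2D position fix from ranges to
    known anchor positions (an illustrative MLAT formulation)."""
    x, y = x0
    for _ in range(iters):
        # Residuals r_i = ||p - a_i|| - d_i; Jacobian rows are the
        # unit vectors from each anchor toward the current estimate.
        JtJ = [[0.0, 0.0], [0.0, 0.0]]
        Jtr = [0.0, 0.0]
        for (ax, ay), d in zip(anchors, ranges):
            dx, dy = x - ax, y - ay
            dist = math.hypot(dx, dy) or 1e-9
            r = dist - d
            ux, uy = dx / dist, dy / dist
            JtJ[0][0] += ux * ux; JtJ[0][1] += ux * uy
            JtJ[1][0] += uy * ux; JtJ[1][1] += uy * uy
            Jtr[0] += ux * r;     Jtr[1] += uy * r
        det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
        if abs(det) < 1e-12:
            break
        # Solve (J^T J) delta = J^T r and update p <- p - delta.
        step_x = (JtJ[1][1] * Jtr[0] - JtJ[0][1] * Jtr[1]) / det
        step_y = (JtJ[0][0] * Jtr[1] - JtJ[1][0] * Jtr[0]) / det
        x, y = x - step_x, y - step_y
    return x, y
```

In a tracking pipeline, each such fix would serve as the measurement input to a Kalman filter, whose innovation gate also flags spoofed broadcasts that disagree with the MLAT solution.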
Non-terrestrial networks (NTNs) have traditionally served a limited set of applications. However, recent technological advancements have opened up a myriad of applications of NTNs for 5G and beyond networks, especially when integrated into terrestrial networks (TNs). This article comprehensively surveys the evolution of NTNs, highlighting their relevance to 5G networks and, essentially, how they will play a pivotal role in the development of 6G and beyond wireless networks. The survey discusses important features of NTN integration into TNs by delving into the new range of services and use cases, various architectures, and new approaches being adopted to develop a new wireless ecosystem. Our survey covers major progress and outcomes from academic research as well as industrial efforts. We first introduce the relevant 5G use cases and general integration challenges, such as handover and deployment difficulties. Then, we review NTN operations in the mmWave bands and their potential for the Internet of Things (IoT). Further, we discuss the significance of mobile edge computing (MEC) and machine learning (ML) in NTNs by reviewing the relevant research works. Furthermore, we discuss the corresponding higher-layer advancements and relevant field trials/prototyping at both academic and industrial levels. Finally, we identify and review 6G and beyond application scenarios, novel architectures, technological enablers, and higher-layer aspects pertinent to NTN integration.