Abstract:Space-air-ground-integrated network (SAGIN)-enabled multiconnectivity (MC) is emerging as a key enabler for next-generation networks, allowing users to simultaneously utilize multiple links across multi-layer non-terrestrial networks (NTN) and multi-radio access technology (multi-RAT) terrestrial networks (TN). However, the heterogeneity of TN and NTN introduces complex architectural challenges that complicate MC implementation. Specifically, the diversity of link types, spanning air-to-air, air-to-space, space-to-space, space-to-ground, and ground-to-ground communications, renders optimal resource allocation highly complex. Recent advancements in reinforcement learning (RL) and agentic artificial intelligence (AI) have shown remarkable effectiveness for decision-making in complex and dynamic environments. In this paper, we review the current developments in SAGIN-enabled MC and outline the key challenges associated with its implementation. We further highlight the transformative potential of AI-driven approaches for resource optimization in a heterogeneous SAGIN environment. To this end, we present a case study on resource allocation optimization enabled by agentic RL for SAGIN-enabled MC involving diverse radio access technologies (RATs). Results show that learning-based methods can effectively handle complex scenarios and substantially enhance network performance in terms of latency and capacity, while incurring a moderate increase in power consumption as an acceptable tradeoff. Finally, open research problems and future directions are presented to realize efficient SAGIN-enabled MC.
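
To illustrate the kind of learning-based resource allocation the case study points to, the sketch below applies plain tabular Q-learning to a toy multi-connectivity link-selection problem. The candidate link set, the synthetic channel/reward model, and the latency and power weights are illustrative assumptions; the paper's agentic RL formulation is more elaborate.

```python
import numpy as np

# Minimal tabular Q-learning sketch for multi-connectivity link selection.
# The link set, toy channel model, and reward weights are illustrative
# assumptions, not the case study's actual formulation.

rng = np.random.default_rng(0)

LINKS = ["LEO", "HAPS", "TN_mmWave", "TN_sub6"]           # candidate RATs/layers
ACTIONS = [(i, j) for i in range(len(LINKS))
           for j in range(len(LINKS)) if i < j]            # activate two links at once
N_STATES = 8                                               # coarse channel-quality states

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: reward = capacity - latency penalty - power penalty."""
    i, j = ACTIONS[action]
    capacity = rng.normal(5 + i + j, 1.0)                  # aggregated over both links
    latency = rng.normal(3 - 0.3 * (i + j), 0.5)
    power = 0.5 * (i + j)
    reward = capacity - 0.5 * max(latency, 0) - 0.2 * power
    next_state = rng.integers(N_STATES)                    # stand-in for channel dynamics
    return reward, next_state

state = rng.integers(N_STATES)
for _ in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    r, s_next = step(state, a)
    Q[state, a] += alpha * (r + gamma * Q[s_next].max() - Q[state, a])
    state = s_next

best = [ACTIONS[int(a)] for a in Q.argmax(axis=1)]
print("Preferred link pair per state:", [(LINKS[i], LINKS[j]) for i, j in best])
```
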
Abstract:While machine learning is widely used to optimize wireless networks, training a separate model for each task in communication and localization is becoming increasingly unsustainable due to the significant costs associated with training and deployment. Foundation models offer a more scalable alternative by enabling a single model to be adapted across multiple tasks through fine-tuning with limited samples. However, current foundation models mostly rely on large-scale Transformer architectures, resulting in computationally intensive models unsuitable for deployment on typical edge devices. This paper presents a lightweight foundation model based on simple Multi-Layer Perceptron (MLP) encoders that independently process input patches. Our model supports four types of downstream tasks (long-range technology recognition, short-range technology recognition, modulation recognition, and line-of-sight detection) from multiple input types (IQ and CIR) and different sampling rates. We show that, unlike Transformers, which can exhibit performance drops as downstream tasks are added, our MLP model maintains robust generalization performance, achieving over 97% accuracy when fine-tuned on previously unseen data classes. These results are achieved despite having only 21K trainable parameters, allowing an inference time of 0.33 ms on common edge devices and making the model suitable for constrained real-time deployments.
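
A minimal sketch of the patch-wise MLP idea: each input patch is encoded independently by a small MLP, the patch embeddings are pooled, and task-specific heads produce the per-task outputs. The patch length, embedding size, pooling, and head definitions below are assumptions chosen for brevity, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Lightweight patch-wise MLP encoder with per-task heads (illustrative sizes).

class PatchMLPEncoder(nn.Module):
    def __init__(self, patch_len=32, embed_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_len, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, x):                 # x: (batch, n_patches, patch_len)
        z = self.encoder(x)               # each patch is encoded independently
        return z.mean(dim=1)              # simple pooling over patches

class MultiTaskModel(nn.Module):
    def __init__(self, n_classes_per_task):
        super().__init__()
        self.backbone = PatchMLPEncoder()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(32, n) for name, n in n_classes_per_task.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

model = MultiTaskModel({"lr_tech": 5, "sr_tech": 4, "modulation": 8, "los": 2})
iq = torch.randn(16, 64, 32)              # 16 signals split into 64 patches of 32 samples
print(model(iq, "modulation").shape)      # -> torch.Size([16, 8])
print(sum(p.numel() for p in model.parameters()), "trainable parameters")
```
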
Abstract:Wireless Technology Recognition (WTR) is essential in modern communication systems, enabling efficient spectrum management and the seamless coexistence of diverse technologies. In real-world conditions, WTR solutions should be able to handle signals from various sources with different sampling rates, capturing devices, and frequency bands. However, traditional WTR methods, which rely on energy detection, Convolutional Neural Network (CNN) models, or Deep Learning (DL), lack the robustness and adaptability required to generalize across unseen environments, different sampling devices, and previously unencountered signal classes. In this work, we introduce a Transformer-based foundation model for WTR, trained in an unsupervised manner on large-scale, unlabeled wireless signal datasets. Foundation models are designed to learn general-purpose representations that transfer effectively across tasks and domains, allowing generalization to new technologies and WTR sampling devices. Our approach leverages input patching for computational efficiency and incorporates a two-stage training pipeline: unsupervised pre-training followed by lightweight fine-tuning. This enables the model to generalize to new wireless technologies and environments using only a small number of labeled samples. Experimental results demonstrate that our model achieves superior accuracy across varying sampling rates and frequency bands while maintaining low computational complexity, supporting the vision of a reusable wireless foundation model adaptable to new technologies with minimal retraining.
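
The two-stage pipeline can be sketched as masked-patch reconstruction for unsupervised pre-training followed by lightweight supervised fine-tuning of a small head. The patch size, model width, masking ratio, and training loops below are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

# Stage 1: masked-patch reconstruction with a small Transformer encoder.
# Stage 2: fine-tuning a classification head on a few labeled samples.
PATCH, DIM = 64, 128

class WTRBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(PATCH, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches):                    # (batch, n_patches, PATCH)
        return self.encoder(self.embed(patches))   # (batch, n_patches, DIM)

backbone, recon_head = WTRBackbone(), nn.Linear(DIM, PATCH)
opt = torch.optim.Adam(list(backbone.parameters()) + list(recon_head.parameters()), lr=1e-3)

# Stage 1: unsupervised pre-training on unlabeled signals (toy loop on random data).
for _ in range(10):
    x = torch.randn(8, 32, PATCH)
    mask = torch.rand(8, 32) < 0.3                 # hide 30% of the patches
    x_in = x.masked_fill(mask.unsqueeze(-1), 0.0)
    loss = ((recon_head(backbone(x_in)) - x)[mask] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: lightweight fine-tuning with a few labeled samples per technology.
clf = nn.Linear(DIM, 6)                            # e.g. 6 technology classes (assumption)
ft_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)  # backbone kept frozen here
x_few, y_few = torch.randn(16, 32, PATCH), torch.randint(0, 6, (16,))
with torch.no_grad():
    feats = backbone(x_few).mean(dim=1)
loss = nn.functional.cross_entropy(clf(feats), y_few)
ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```
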
Abstract:Cross-Technology Interference (CTI) poses challenges for the performance and robustness of wireless networks. There are opportunities for better cooperation if the spectral occupation and technology of the interference can be detected. Namely, this information can help the Orthogonal Frequency Division Multiple Access (OFDMA) scheduler in IEEE 802.11ax (Wi-Fi 6) to efficiently allocate resources to multiple users in the frequency domain. This work shows that a single Channel State Information (CSI) snapshot, which is used for packet demodulation in the receiver, is enough to detect and classify the type of CTI on low-cost Wi-Fi 6 hardware. We show the classification accuracy of a small Convolutional Neural Network (CNN) for different Signal-to-Noise Ratios (SNR) and Signal-to-Interference Ratios (SIR) with simulated data, as well as with a wired and an over-the-air test using a professional wireless connectivity tester, while running the inference on the low-cost device. Furthermore, we use openwifi, a full-stack Wi-Fi transceiver running on software-defined radio (SDR) available in the w-iLab.t testbed, as an Access Point (AP) to implement a CTI-aware multi-user OFDMA scheduler in which the clients send CTI detection feedback to the AP. We show experimentally that it can fully mitigate the 35% throughput loss caused by CTI when the AP applies the appropriate scheduling.
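
A minimal sketch of the detection step: a small 1-D CNN maps a single CSI snapshot to a CTI class, and the prediction could then be fed back to the AP's OFDMA scheduler. The number of subcarriers, the CTI class set, and the layer sizes are assumptions, not the configuration used on the openwifi hardware.

```python
import torch
import torch.nn as nn

N_SUBCARRIERS = 256        # CSI vector length (assumption)
CTI_CLASSES = 4            # e.g. none / Bluetooth / Zigbee / LTE (assumption)

class CTIClassifier(nn.Module):
    """Small CNN that classifies the CTI type from one CSI snapshot."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(16 * (N_SUBCARRIERS // 16), CTI_CLASSES),
        )

    def forward(self, csi):                    # csi: (batch, 1, N_SUBCARRIERS)
        return self.net(csi)

model = CTIClassifier()
snapshot = torch.randn(1, 1, N_SUBCARRIERS)    # one CSI snapshot from a received packet
logits = model(snapshot)
print("Predicted CTI class:", int(logits.argmax(dim=1)))
# The predicted class (and the affected subcarrier range) is what the clients
# would report to the AP so the OFDMA scheduler can steer resource units away
# from the interfered portion of the band.
```
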




Abstract:This white paper discusses the role of large-scale AI in the telecommunications industry, with a specific focus on the potential of generative AI to revolutionize network functions and user experiences, especially in the context of 6G systems. It highlights the development and deployment of Large Telecom Models (LTMs), which are tailored AI models designed to address the complex challenges faced by modern telecom networks. The paper covers a wide range of topics, from the architecture and deployment strategies of LTMs to their applications in network management, resource allocation, and optimization. It also explores the regulatory, ethical, and standardization considerations for LTMs, offering insights into their future integration into telecom infrastructure. The goal is to provide a comprehensive roadmap for the adoption of LTMs to enhance scalability, performance, and user-centric innovation in telecom networks.
Abstract:A significant challenge in data-driven research is collecting a large amount of data and learning the underlying relationship between the input and output variables. This paper outlines the process of collecting and validating a dataset designed to determine the angle of arrival (AoA) using Bluetooth Low Energy (BLE) technology. The data, collected in a laboratory setting, is intended to approximate real-world industrial scenarios. This paper discusses the data collection process, the structure of the dataset, and the methodology adopted for automating sample labeling for supervised learning. The collected samples and the process of generating ground truth (GT) labels were validated using the Texas Instruments (TI) phase difference of arrival (PDoA) implementation on the data, yielding a mean absolute error (MAE) of $25.71^\circ$ at one of the heights without obstacles. Distance estimation on BLE was implemented using a Gaussian Process Regression algorithm, yielding an MAE of $0.174$ m.
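
A minimal sketch of Gaussian Process Regression for BLE distance estimation using scikit-learn. Using RSSI as the sole input feature and a log-distance path-loss model to generate toy training data are assumptions for illustration; the dataset's actual features and evaluation protocol may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)

# Toy training data: distance (m) -> noisy RSSI (dBm) via log-distance path loss.
d_train = rng.uniform(0.5, 15.0, size=200)
rssi_train = -45 - 20 * np.log10(d_train) + rng.normal(0, 2.0, size=d_train.shape)

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0),
    normalize_y=True,
)
gpr.fit(rssi_train.reshape(-1, 1), d_train)        # learn the RSSI -> distance mapping

# Evaluate with mean absolute error on held-out toy samples.
d_test = rng.uniform(0.5, 15.0, size=50)
rssi_test = -45 - 20 * np.log10(d_test) + rng.normal(0, 2.0, size=d_test.shape)
d_pred, d_std = gpr.predict(rssi_test.reshape(-1, 1), return_std=True)
print("MAE: %.3f m" % np.mean(np.abs(d_pred - d_test)))
```
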
Abstract:Co-channel interference cancellation (CCI) is the process used to reduce interference from other signals using the same frequency channel, thereby enhancing the performance of wireless communication systems. An improvement to this approach is blind CCI, which reduces interference without relying on prior knowledge of the interfering signal characteristics. Recent work suggested using machine learning (ML) models for this purpose, but high-throughput ML solutions are still lacking, especially for edge devices with limited resources. This work explores the adaptation of U-Net Convolutional Neural Network models for high-throughput blind source separation. Our approach is based on architectural modifications, notably quantization and the incorporation of depthwise separable convolution, to achieve a balance between computational efficiency and performance. Our results demonstrate that the proposed models achieve superior MSE scores when removing unknown interference sources from the signals while maintaining significantly lower computational complexity compared to baseline models. One of our proposed models is deeper and fully convolutional, while the other is shallower with a convolutional structure incorporating an LSTM. Depthwise separable convolution and quantization further reduce the memory footprint and computational demands, albeit with some performance trade-offs. Specifically, applying depthwise separable convolutions to the model with the LSTM results in only a 0.72% degradation in MSE score while reducing MACs by 58.66%. For the fully convolutional model, we observe a 0.63% improvement in MSE score with 61.10% fewer MACs. Overall, our findings underscore the feasibility of using optimized machine-learning models for interference cancellation on devices with limited resources.
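
The depthwise separable convolution used to slim down such a U-Net style separator can be sketched as a per-channel (depthwise) convolution followed by a 1x1 pointwise convolution; the comparison below shows the parameter reduction relative to a standard convolution with the same output shape. Channel counts and kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise (per-channel) convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=9):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv1d(64, 128, kernel_size=9, padding=4)
separable = DepthwiseSeparableConv1d(64, 128, kernel_size=9)

x = torch.randn(1, 64, 1024)               # (batch, channels, IQ samples)
assert standard(x).shape == separable(x).shape
print("standard conv params:  ", n_params(standard))    # ~73.9K
print("separable conv params: ", n_params(separable))   # ~9.0K, same output shape
```
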
Abstract:Indoor positioning systems based on Ultra-wideband (UWB) technology are gaining recognition for their ability to provide cm-level localization accuracy. However, these systems often encounter challenges caused by dense multi-path fading, leading to positioning errors. To address this issue, in this letter, we propose a novel methodology for unsupervised anchor node selection using deep embedded clustering (DEC). Our approach applies an Auto Encoder (AE) before clustering, thereby mapping the UWB input signals into better-separable clusters. We furthermore investigate how to rank these clusters based on their quality, allowing us to remove untrustworthy signals. Experimental results show the efficiency of our proposed method, demonstrating a significant 23.1% reduction in mean absolute error (MAE) compared to the case without anchor exclusion. In the dense multi-path area especially, our algorithm achieves even larger gains, reducing the MAE by 26.6% and the 95th percentile error by 49.3% compared to the case without anchor exclusion.
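
A simplified stand-in for the proposed pipeline: an autoencoder compresses per-anchor UWB features, K-means clusters the latent codes, and clusters are ranked by compactness so the least trustworthy measurements can be excluded. Feature dimensions, latent size, and the ranking rule are assumptions, and full DEC additionally refines the assignments with a clustering loss, which is omitted here.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

FEAT_DIM, LATENT, K = 20, 4, 3
features = torch.randn(500, FEAT_DIM)          # e.g. CIR-derived features per anchor (toy data)

encoder = nn.Sequential(nn.Linear(FEAT_DIM, 16), nn.ReLU(), nn.Linear(16, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(), nn.Linear(16, FEAT_DIM))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(200):                            # train the autoencoder (reconstruction loss)
    recon = decoder(encoder(features))
    loss = nn.functional.mse_loss(recon, features)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    z = encoder(features).numpy()               # latent embedding of every measurement

km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(z)
compactness = [np.mean(np.linalg.norm(z[km.labels_ == c] - km.cluster_centers_[c], axis=1))
               for c in range(K)]
ranked = np.argsort(compactness)                # tightest (most trustworthy) cluster first
print("cluster ranking:", ranked, "-> exclude measurements in cluster", ranked[-1])
```
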
Abstract:Indoor positioning using UWB technology has gained interest due to its centimeter-level accuracy potential. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data. A reinforcement learning agent uses the channel impulse response as a state and predicts corrections to minimize the error between corrected and estimated ranges. The agent learns in a self-supervised manner by iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothing. Experiments on real-world UWB measurements demonstrate performance comparable to state-of-the-art supervised methods, overcoming their data dependency and limited generalizability. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB ranging error correction.
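
A highly simplified sketch of the self-supervision signal: pseudo-targets are obtained by filtering the raw range estimates along the trajectory (a median filter here), and a network fed with the CIR learns a correction toward them. The paper trains this with a reinforcement learning agent; the sketch replaces the RL update with direct regression purely for brevity, and the toy CIR/NLOS model is an assumption.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
CIR_LEN, T = 32, 400

true_range = torch.linspace(2.0, 10.0, T)               # unknown at training time
nlos = (torch.rand(T) < 0.3).float()                    # toy NLOS indicator
cir = torch.randn(T, CIR_LEN)
cir[:, 0] = 2.0 * (1.0 - nlos) + 0.3 * torch.randn(T)   # weak first path under NLOS
est_range = true_range + 0.25 * torch.randn(T) + 0.8 * nlos  # NLOS adds positive bias

# Self-supervised pseudo-targets: median filtering of the estimates along the trajectory.
padded = torch.cat([est_range[:4], est_range, est_range[-4:]])
pseudo = padded.unfold(0, 9, 1).median(dim=1).values    # length T

policy = nn.Sequential(nn.Linear(CIR_LEN, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(policy.parameters(), lr=3e-3)

for _ in range(1000):
    correction = policy(cir).squeeze(-1)                # predicted per-measurement offset
    loss = ((est_range + correction - pseudo) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    corrected = est_range + policy(cir).squeeze(-1)
print("MAE before: %.3f m, after: %.3f m" %
      (float((est_range - true_range).abs().mean()),
       float((corrected - true_range).abs().mean())))
```
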




Abstract:Due to their large bandwidth, relatively low cost, and robust performance, UWB radio chips can be used for a wide variety of applications, including localization, communication, and radar. This article offers an exhaustive survey of recent progress in UWB radar technology. The goal of this survey is to provide a comprehensive view of the technical fundamentals and emerging trends in UWB radar. Our analysis is categorized into multiple parts. Firstly, we explore the fundamental concepts of UWB radar technology from a technology and standardization point of view. Secondly, we examine the most relevant UWB applications and use cases, such as device-free localization, activity recognition, presence detection, and vital sign monitoring, discussing for each the bandwidth requirements, processing techniques, algorithms, latest developments, relevant example papers, and trends. Next, we steer readers toward relevant datasets and available radio chipsets. Finally, we discuss ongoing challenges and potential future research avenues. As such, this overview paper is designed to be a cornerstone reference for researchers charting the course of UWB radar technology over the last decade.