This paper presents an experimental validation of rare-fading-event prediction using channel distribution information (CDI) maps, which predict channel statistics at a target location by spatially interpolating measurements acquired at surrounding locations. Using experimental channel measurements from 127 locations, we demonstrate the use case of providing statistical guarantees for rate selection in ultra-reliable low-latency communication (URLLC) using CDI maps. Using only the user location and the estimated map, we meet the desired outage probability with a probability between 93.6% and 95.6%, against a target of 95%. In contrast, a model-based baseline scheme that assumes Rayleigh fading meets the target outage requirement with a probability of only 77.2%. The results demonstrate the practical relevance of CDI maps for resource allocation in URLLC.
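To make the rate-selection use case concrete, below is a minimal sketch of how a channel quantile predicted by a CDI map could be turned into a rate with an outage guarantee. The function name, quantile value, and bandwidth are hypothetical; only the principle (transmit at the capacity of the eps-quantile of the SNR to target outage probability eps) follows from the abstract.

```python
import numpy as np

def select_rate(snr_quantile_db, bandwidth_hz=1e6):
    """Rate selection from the eps-quantile of the SNR predicted by the
    CDI map at the user's location: if the instantaneous SNR exceeds
    its eps-quantile with probability 1 - eps, transmitting at the
    capacity of that quantile meets the outage target eps."""
    snr_lin = 10 ** (snr_quantile_db / 10)      # dB -> linear
    return bandwidth_hz * np.log2(1 + snr_lin)  # bits per second

# Hypothetical numbers: the map predicts the 5%-quantile of the SNR at
# the user's position to be -2 dB (95% reliability target).
print(f"{select_rate(-2.0) / 1e3:.1f} kbit/s")
```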
5G has expanded the traditional focus of wireless systems to embrace two new connectivity types: ultra-reliable low-latency communication and massive communication. The technology context at the dawn of 6G differs from that of 5G, primarily due to the growing intelligence at the communicating nodes. This has driven the set of relevant communication problems beyond reliable transmission towards semantic and pragmatic communication. This paper puts the evolution of low-latency and massive communication towards 6G in the perspective of these new developments. First, semantic/pragmatic communication problems are presented by drawing parallels to linguistics. We elaborate upon the relation of semantic communication to the information-theoretic problems of source/channel coding, while generalized real-time communication is put in the context of cyber-physical systems and real-time inference. Finally, we elaborate upon the evolution of massive access towards massive closed-loop communication, which enables interactive communication, learning, and cooperation among wireless sensors and actuators.
Taking inspiration from linguistics, the communication theory community has recently shown significant interest in pragmatic, or goal-oriented, communication. In this paper, we tackle the problem of pragmatic communication with multiple clients with different, and potentially conflicting, objectives. We capture the goal-oriented aspect through the metric of Value of Information (VoI), which considers the estimation of the remote process as well as the timing constraints. However, the most common definition of VoI is simply the Mean Square Error (MSE) of the whole system state, regardless of its relevance for a specific client. Our work aims to overcome this limitation by including different summary statistics, i.e., value functions of the state, for separate clients, and a diversified query process on the client side, reflecting the fact that different applications may request different functions of the process state at different times. We show that a query-aware Deep Reinforcement Learning (DRL) solution based on statically defined VoI can outperform naive approaches by 15-20%.
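As an illustration of the limitation discussed above, the sketch below contrasts the conventional full-state MSE with a query-aware VoI computed on per-client summary statistics. The value functions, queries, and state values are hypothetical examples for exposition, not the ones used in the paper.

```python
import numpy as np

# Each client queries a different summary statistic (value function)
# of the N-dimensional process state; hypothetical examples below.
value_functions = {
    "mean_sensor":  lambda x: np.mean(x),
    "max_sensor":   lambda x: np.max(x),
    "first_sensor": lambda x: x[0],
}

def query_aware_voi(x_true, x_est, query):
    """Query-aware VoI: squared error of the requested summary
    statistic, rather than the MSE of the full state vector."""
    f = value_functions[query]
    return (f(x_true) - f(x_est)) ** 2

x_true = np.array([1.0, 2.0, 3.0])
x_est  = np.array([1.1, 1.8, 3.3])
print(query_aware_voi(x_true, x_est, "max_sensor"))  # error on the max only
print(np.mean((x_true - x_est) ** 2))                # classic full-state MSE
```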
Sensing is envisioned as a key network function of 6G mobile networks. Artificial intelligence (AI)-empowered sensing fuses the features of multiple sensing views, acquired by devices distributed in edge networks, to allow the edge server to perform accurate inference. This process, known as multi-view pooling, creates a communication bottleneck due to multi-access by many devices. To alleviate this issue, we propose a task-oriented simultaneous-access scheme for distributed sensing called Over-the-Air Pooling (AirPooling). The existing Over-the-Air Computing (AirComp) technique can be directly applied to realize Average-AirPooling by exploiting the waveform-superposition property of a multi-access channel. However, despite being the most popular pooling operation in practice, over-the-air maximization, called Max-AirPooling, is not directly AirComp-realizable, as AirComp supports only a limited subset of functions. We tackle this challenge by proposing a generalized AirPooling framework that supports both Max- and Average-AirPooling through the control of a configuration parameter. Max-AirPooling is realized by augmenting AirComp with designed pre-processing at the devices and post-processing at the server. To characterize the end-to-end sensing performance, the theory of classification margin is applied to relate the classification accuracy to the AirPooling error. The analysis reveals an inherent tradeoff in Max-AirPooling between the accuracy of the pooling-function approximation and the effectiveness of noise suppression. Exploiting this tradeoff, we optimize the configuration parameter of Max-AirPooling, yielding a sub-optimal closed-form method of adaptive parametric control. Experimental results obtained on real-world datasets show that AirPooling provides sensing accuracies close to those achievable with the traditional digital air interface while dramatically reducing communication latency.
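A minimal numerical sketch of the generalized-pooling idea, assuming the configuration parameter enters as an exponent p in a p-norm-style pooling function (an assumption for illustration; the paper's exact pre/post-processing may differ): p = 1 recovers average pooling after scaling, while large p approximates the maximum at the cost of amplified channel noise, which is the tradeoff the abstract mentions.

```python
import numpy as np

def air_pooling(features, p, noise_std=0.01, rng=None):
    """Generalized AirPooling sketch: devices pre-process their feature
    values as x^p, the multi-access channel sums them (plus noise), and
    the server post-processes with (.)^(1/p)."""
    if rng is None:
        rng = np.random.default_rng(0)
    tx = features ** p                        # pre-processing at devices
    rx = tx.sum() + rng.normal(0, noise_std)  # over-the-air superposition
    return max(rx, 0.0) ** (1.0 / p)          # post-processing at server

x = np.array([0.2, 0.5, 0.9, 0.4])       # per-device feature values in [0, 1]
print(air_pooling(x, p=1) / len(x))      # ~ average pooling (true mean 0.5)
for p in (4, 16, 64):
    print(p, air_pooling(x, p))          # approaches max(x) = 0.9
```

Raising p sharpens the approximation of the max, but the transmitted values x^p shrink for x < 1, so the fixed channel noise becomes relatively larger; this is the approximation-versus-noise-suppression tradeoff that the adaptive parametric control optimizes.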
This paper proposes exploiting the spatial correlation of wireless channel statistics beyond conventional received-signal-strength maps by constructing statistical radio maps that predict any relevant channel statistic to assist communications. Specifically, from stored channel samples acquired by previous users in the network, we use Gaussian processes (GPs) to estimate quantiles of the channel distribution at a new position using a non-parametric model. This prior information is then used to select the transmission rate for a target level of reliability. The approach is tested with synthetic data simulated from urban micro-cell environments, highlighting how the proposed solution helps to shorten the training estimation phase, which is especially attractive under the tight latency constraints inherent to ultra-reliable low-latency communication (URLLC) deployments.
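A minimal sketch of the GP-based quantile map, using scikit-learn's GaussianProcessRegressor on synthetic data. The positions, the toy path-loss trend, the kernel choice, and the two-sigma back-off are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: at each measured position, previous users
# stored channel samples from which an empirical eps-quantile of the
# SNR (in dB) was computed.
positions = rng.uniform(0, 100, size=(50, 2))        # (x, y) in metres
snr_quantile_db = (
    -0.1 * np.linalg.norm(positions - 50, axis=1)    # toy path-loss trend
    + rng.normal(0, 1, 50)                           # local variation
)

# GP interpolation of the quantile field over space.
gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1.0),
    normalize_y=True,
)
gp.fit(positions, snr_quantile_db)

# A new user at an unmeasured position gets a predicted quantile plus an
# uncertainty; the rate is chosen from a pessimistic (lower-confidence)
# value to preserve the reliability target.
q_mean, q_std = gp.predict(np.array([[30.0, 60.0]]), return_std=True)
q_safe = q_mean[0] - 2.0 * q_std[0]
rate = np.log2(1 + 10 ** (q_safe / 10))              # bits/s/Hz
print(f"Predicted quantile {q_mean[0]:.1f} dB, rate {rate:.2f} bit/s/Hz")
```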
Wireless applications that use high-reliability low-latency links depend critically on the capability of the system to predict link quality. This dependence is especially acute at the high carrier frequencies used by mmWave and THz systems, where links are susceptible to blockages. Predicting blockages with high reliability requires a large number of data samples to train effective machine learning models. To mitigate these data requirements, we introduce a framework based on meta-learning, whereby data from distinct deployments are leveraged to optimize a shared initialization that decreases the data set size necessary for any new deployment. Predictors of two different events are studied: (1) at least one blockage occurs in a time window, and (2) the link is blocked for the entire time window. The results show that a recurrent neural network (RNN)-based predictor trained using meta-learning can predict blockages after observing fewer samples than predictors trained using standard methods.
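For illustration, here is a minimal meta-learning sketch in the spirit of the shared-initialization idea, using a first-order Reptile-style update (an assumption; the paper's meta-learning algorithm may differ) with a small GRU classifier on synthetic power traces standing in for real blockage data.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

class BlockagePredictor(nn.Module):
    """GRU that maps a window of received-power samples to the logit of
    a blockage event occurring in the next window."""
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, 1)
        _, h = self.rnn(x)
        return self.head(h[-1]).squeeze(-1)  # logits, shape (batch,)

def sample_task(bias, n=32, t=20):
    """Hypothetical deployment: synthetic power traces whose labels
    depend on a deployment-specific bias (stand-in for real data)."""
    x = torch.randn(n, t, 1) + bias
    y = (x.mean(dim=(1, 2)) > bias).float()
    return x, y

def inner_adapt(model, x, y, steps=5, lr=1e-2):
    """Few-shot adaptation of a copy of the shared initialization."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Reptile-style outer loop: move the shared initialization towards the
# task-adapted weights of randomly drawn deployments.
meta = BlockagePredictor()
meta_lr = 0.1
for _ in range(100):
    bias = torch.randn(1).item()             # draw a deployment
    adapted = inner_adapt(meta, *sample_task(bias))
    with torch.no_grad():
        for p, q in zip(meta.parameters(), adapted.parameters()):
            p += meta_lr * (q - p)
# After meta-training, a new deployment needs only the few inner steps
# above (i.e., few samples) to specialize the shared initialization.
```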