Fronthaul quantization causes significant distortion in cell-free massive MIMO networks. Because fronthaul links have limited capacity, the information exchanged among access points (APs) must be coarsely quantized. Furthermore, the complexity of multiplication in the baseband processing unit grows with the bit width of the operands, so quantizing the APs' signal vectors also reduces the complexity of signal estimation. Most recent works consider direct quantization of the received signal vectors at each AP without any pre-processing. However, the signal vectors received at different APs are mutually correlated (inter-AP correlation) and also have correlated dimensions (intra-AP correlation). Hence, cooperative quantization across the APs' fronthaul can use the quantization bits at each AP more efficiently and further reduce the distortion imposed on the quantized vectors. This paper considers a daisy-chain fronthaul and three different processing sequences at each AP. We show that (1) de-correlating the received signal vector at each AP from the corresponding vectors of the previous APs (inter-AP de-correlation) and (2) de-correlating the dimensions of the received signal vector at each AP (intra-AP de-correlation) before quantization uses the quantization bits at each AP more efficiently than directly quantizing the received signal vector without pre-processing and, consequently, improves the bit error rate (BER) and normalized mean square error (NMSE) of the users' signal estimates.
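The intra-AP de-correlation idea can be illustrated with a standard transform-coding sketch: rotate the received vector into the eigenbasis of its covariance, give stronger eigenmodes more bits, quantize, and rotate back. The dimensions, bit budget, and proportional bit allocation below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlated received-signal samples at one AP (intra-AP
# correlation); dimensions and sample count are illustrative.
n_dim, n_samp = 4, 10000
A = rng.standard_normal((n_dim, n_dim))
x = A @ rng.standard_normal((n_dim, n_samp))  # rows are correlated dimensions

def uniform_quantize(v, bits, vmax):
    """Uniform quantizer with 2**bits steps, clipped to [-vmax, vmax]."""
    step = 2 * vmax / (2 ** bits)
    return np.clip(np.round(v / step) * step, -vmax, vmax)

# Intra-AP de-correlation: rotate into the eigenbasis of the sample covariance.
C = np.cov(x)
eigval, U = np.linalg.eigh(C)
y = U.T @ x  # de-correlated coefficients

# Simple proportional bit allocation (not exactly budget-matched): stronger
# eigenmodes get more bits, and each mode gets at least one.
total_bits = 4 * n_dim
alloc = np.maximum(1, np.round(total_bits * eigval / eigval.sum())).astype(int)

yq = np.vstack([uniform_quantize(y[i], alloc[i], 4 * np.sqrt(eigval[i]))
                for i in range(n_dim)])
x_hat = U @ yq  # back to the signal domain

nmse = np.mean((x - x_hat) ** 2) / np.mean(x ** 2)
print(f"NMSE of transform-coded reconstruction: {nmse:.4f}")
```

Because the rotation is unitary, the quantization error in the eigenbasis maps directly to the signal domain; the gain comes from spending bits where the variance is.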
Cell-Free Massive MIMO (CF mMIMO) has emerged as a potential enabler for future networks. It has been shown that these networks are much more energy-efficient than classical cellular systems when serving users at peak capacity. However, CF mMIMO networks are dimensioned for peak traffic loads, and outside those periods they are significantly over-dimensioned and far from energy efficient. To this end, Adaptive Access Point (AP) ON/OFF Switching (ASO) strategies have been developed to save energy at lower traffic loads by putting unnecessary APs to sleep. Unfortunately, existing strategies rely on measuring channel state information between every user and every AP, incurring significant measurement energy overhead. Furthermore, the current state-of-the-art approach has a computational complexity that scales exponentially with the number of APs. In this work, we present a novel convex feasibility testing method that checks per-user Quality-of-Service (QoS) requirements without considering all possible AP activations. We then propose an iterative algorithm that activates APs until all users' requirements are fulfilled. We show that our method performs comparably to the optimal solution while avoiding costly mixed-integer problems and requiring channel state information from only a limited subset of APs.
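The iterative activation idea can be pictured with a toy greedy loop; the `qos_met` proxy below (each user's summed channel gain from active APs) is a stand-in assumption for the paper's convex feasibility test, and all sizes and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aps, n_users = 12, 6
gain = rng.exponential(1.0, size=(n_aps, n_users))  # synthetic AP-user gains

def qos_met(active, threshold=2.0):
    """Hypothetical QoS proxy: every user's summed gain from active APs."""
    if not active:
        return False
    return bool(np.all(gain[list(active)].sum(axis=0) >= threshold))

active = set()
while not qos_met(active) and len(active) < n_aps:
    # Activate the AP that most helps the currently worst-served user.
    per_user = gain[list(active)].sum(axis=0) if active else np.zeros(n_users)
    worst = int(np.argmin(per_user))
    candidates = [a for a in range(n_aps) if a not in active]
    active.add(max(candidates, key=lambda a: gain[a, worst]))

print(f"Activated {len(active)} of {n_aps} APs")
```

The point of the sketch is the control flow: APs are switched on incrementally and the loop stops as soon as the feasibility check passes, so not all activation subsets need to be examined.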
Galileo is the first global navigation satellite system to authenticate its civilian signals through the Open Service Navigation Message Authentication (OSNMA) protocol. However, OSNMA delays the time to obtain a first position and time fix, the so-called Time To First Authentication Fix (TTFAF). Reducing the TTFAF as much as possible is crucial to integrating the technology seamlessly into current products. In cases where the receiver already has cryptographic data available, the so-called hot-start mode and the focus of this article, currently available implementations achieve an average TTFAF of around 100 seconds in ideal environments. In this work, we dissect the TTFAF process, propose two main optimizations to reduce it, and benchmark them in three distinct scenarios (open-sky, soft urban, and hard urban) with real recorded data. Moreover, we evaluate the optimizations using the synthetic scenario from the official OSNMA test vectors. The first block of optimizations centers on extracting as much information as possible from broken sub-frames by processing them at page level and combining redundant data from multiple satellites. The second block of optimizations aims to reconstruct missed navigation data by using fields in the authentication tags belonging to the same sub-frame as the authentication key. Combining both optimizations improves the TTFAF substantially in all considered scenarios. We obtain an average TTFAF of 60.9 and 68.8 seconds for the test vectors and the open-sky scenario, respectively, with a best case of 44.0 seconds in both. Likewise, the urban scenarios see a drastic reduction of the average TTFAF between the non-optimized and optimized cases: from 127.5 to 87.5 seconds in the soft urban scenario and from 266.1 to 146.1 seconds in the hard urban scenario. These optimizations are available as part of the open-source OSNMAlib library on GitHub.
In the evolution of 6th Generation (6G) technology, the emergence of cell-free networking presents a paradigm shift, revolutionizing user experiences within densely deployed networks where distributed access points collaborate. However, integrating intelligent mechanisms is crucial for optimizing the efficiency, scalability, and adaptability of these 6G cell-free networks. One application aiming to optimize spectrum usage is Automatic Modulation Classification (AMC), a vital component for classifying and dynamically adjusting modulation schemes. This paper explores distributed solutions for AMC in cell-free networks, addressing the training, computational complexity, and accuracy of two practical approaches. The first approach addresses scenarios where signal sharing is not feasible due to privacy concerns or fronthaul limitations. Our findings reveal that comparable accuracy can be maintained, albeit at the cost of increased computational demand. The second approach considers a central model and multiple distributed models that collaboratively classify the modulation. This hybrid model leverages diversity gain through signal combining and requires synchronization and signal sharing. It demonstrates superior performance, achieving a 2.5% improvement in accuracy at an equivalent total computational load. Notably, the hybrid model distributes the computational load across multiple devices, resulting in a lower computational load per device.
This paper investigates the use of Neural Network (NN) nonlinear modelling for Power Amplifier (PA) linearization in the Walsh-Hadamard transceiver architecture. This novel architecture has recently been proposed for ultra-high-bandwidth systems to reduce transceiver power consumption through extensive parallelization of the digital baseband hardware. The parallelization is achieved by replacing two-dimensional quadrature modulation with multi-dimensional Walsh-Hadamard modulation. The open research question for this architecture is whether conventional baseband signal processing algorithms can be similarly parallelized while retaining their performance. A key baseband algorithm, digital predistortion using NN models for PA linearization, is adapted here to the parallel Walsh architecture. A straightforward parallelization of the state-of-the-art NN architecture is extended with a cross-domain Knowledge Distillation pre-training method to achieve linearization performance on par with the quadrature implementation. This result paves the way for the entire baseband processing chain to be adapted to ultra-high-bandwidth, low-power Walsh transceivers.
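For readers unfamiliar with the modulation swap, a minimal fast Walsh-Hadamard transform (FWHT) shows the basis involved: it needs only additions and subtractions and is, up to a scale factor, its own inverse, which is what makes extensive parallelization attractive. This is a generic textbook sketch, not the paper's hardware design.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < x.size:
        for i in range(0, x.size, 2 * h):
            for j in range(i, i + h):
                # Butterfly: only additions and subtractions, no multiplies.
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

data = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
coeffs = fwht(data)
# The unnormalized transform satisfies fwht(fwht(x)) == N * x.
recovered = fwht(coeffs) / data.size
print(recovered)
```

Each of the N output coefficients can be computed by an independent low-rate branch, which is the structural property the parallel Walsh transceiver exploits.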
Cell-free massive multiple-input multiple-output (MIMO) is an emerging technology that will reshape the architecture of next-generation networks. This paper considers the sequential fronthaul, whereby the access points (APs) are connected in a daisy-chain topology with multiple sequential processing stages. With this sequential processing in the uplink, each AP refines the users' signal estimates received from the previous AP based on its own local received signal vector. While this processing architecture has been shown to achieve the same performance as centralized processing, the impact of limited memory capacity at the APs on the store-and-forward processing architecture is yet to be analyzed. Thus, we model the received signal vector compression using rate-distortion theory to demonstrate the effect of limited memory capacity on the optimal number of APs in the daisy-chain fronthaul. Without this memory constraint, more geographically distributed antennas alleviate the adverse effect of large-scale fading on the signal-to-interference-plus-noise ratio (SINR). However, we show that with limited memory capacity at each AP, the memory required to store the received signal vectors at the final AP of the fronthaul becomes a limiting factor. In other words, when deciding on the number of APs over which to distribute the antennas, there is an inherent trade-off between macro-diversity and the compression noise power of the stored signal vectors. Hence, the available memory capacity at the APs significantly influences the optimal number of APs in the fronthaul.
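Rate-distortion modeling of compression noise typically rests on the Gaussian rate-distortion bound, under which a sample of variance sigma^2 stored with R bits incurs distortion at best sigma^2 * 2^(-2R). A minimal sketch, with an invented memory budget, shows how shrinking the per-sample rate at a memory-limited AP inflates the compression noise floor:

```python
# Gaussian rate-distortion bound, the standard model behind "compression
# noise": D(R) = sigma^2 * 2**(-2R) for a sample of variance sigma^2
# stored with R bits.
def compression_noise_power(sigma2, rate_bits):
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

# If an AP's memory budget B (bits) must hold S received samples, the
# per-sample rate is B/S; halving it squares the relative distortion.
# The numbers below are illustrative, not taken from the paper.
B = 1024
for S in (128, 256, 512):
    R = B / S
    D = compression_noise_power(1.0, R)
    print(f"S={S:3d} samples -> R={R:.1f} b/sample, D={D:.2e}")
```

This exponential sensitivity to the per-sample rate is why the storage load at the final AP of the chain can dominate the macro-diversity benefit of adding APs.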
The inherent limitations in scaling up ground infrastructure for future wireless networks, combined with decreasing operational costs of aerial and space networks, are driving considerable research interest in multisegment ground-air-space (GAS) networks. In GAS networks, where ground and aerial users share network resources, ubiquitous and accurate user localization becomes indispensable, not only as an end-user service but also as an enabler for location-aware communications. This breaks the convention of having localization as a byproduct in networks primarily designed for communications. To address these imperative localization needs, the design and utilization of ground, aerial, and space anchors require thorough investigation. In this tutorial, we provide an in-depth systemic analysis of the radio localization problem in GAS networks, considering ground and aerial users as targets to be localized. Starting from a survey of the most relevant works, we then define the key characteristics of anchors and targets in GAS networks. Subsequently, we detail localization fundamentals in GAS networks, considering 3D positions and orientations. Afterward, we thoroughly analyze radio localization systems in GAS networks, detailing the system model, design aspects, and considerations for each of the three GAS anchors. Preliminary results are presented to provide a quantifiable perspective on key design aspects in GAS-based localization scenarios. We then identify the vital roles 6G enablers are expected to play in radio localization in GAS networks.
This paper presents a novel pipeline for vital sign monitoring using a 26 GHz multi-beam communication testbed. In the context of Joint Communication and Sensing (JCAS), the advanced communication capability at millimeter-wave bands is comparable to the radio resources of radars and is promising for sensing the surrounding environment. Being able to both communicate and sense the vital signs of humans present in the environment will enable new vertical telecommunication services, e.g., remote health monitoring. The proposed processing pipeline leverages spatially orthogonal beams to estimate the vital signs - breath rate and heart rate - of single and multiple persons in static scenarios from raw Channel State Information (CSI) samples. We consider both monostatic and bistatic sensing scenarios. For the monostatic scenario, we employ phase time-frequency calibration and the Discrete Wavelet Transform to improve performance over conventional Fast Fourier Transform based methods. For the bistatic scenario, we use the K-means clustering algorithm to extract multi-person vital signs, exploiting the frequency-domain signal features that distinguish single- from multi-person scenarios. The results show that the estimated breath rate and heart rate have errors below 2 beats per minute (bpm) relative to the reference captured by an on-body sensor, for the single-person monostatic sensing scenario with body-transceiver distances up to 2 m and the two-person bistatic sensing scenario with BS-UE distances up to 4 m. The presented work does not optimize the OFDM waveform parameters for sensing; rather, it demonstrates a promising JCAS proof-of-concept for contact-free vital sign monitoring using mmWave multi-beam communication systems.
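As a point of reference for the FFT-based baseline the pipeline improves upon, the sketch below estimates a breathing rate by peak-picking the spectrum of a phase signal inside a physiological band. The signal is synthetic and the sampling rate is an assumption; real CSI processing also requires the calibration steps the paper describes.

```python
import numpy as np

fs = 20.0                      # assumed CSI sampling rate in Hz
t = np.arange(0, 60, 1 / fs)   # 60 s observation window
breath_hz = 0.25               # synthetic ground truth: 15 breaths per minute
rng = np.random.default_rng(1)
phase = (0.5 * np.sin(2 * np.pi * breath_hz * t)
         + 0.05 * rng.standard_normal(t.size))  # toy CSI phase signal

# Conventional FFT-based estimate: dominant peak in the breathing band.
spec = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)  # plausible breathing frequencies
est_hz = freqs[band][np.argmax(spec[band])]
print(f"Estimated breath rate: {60 * est_hz:.1f} bpm")
```

With a 60 s window the spectral resolution is 1 bpm, which hints at why longer observation or refined methods (calibration, wavelets) matter for accurate short-window estimates.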
Unmanned aerial vehicles (UAVs) have gained popularity in the communications research community because of their versatility in placement and their potential to extend the functionality of communication networks. However, a gap still remains in existing works regarding detailed, measurement-verified air-to-ground (A2G) Massive Multi-Input Multi-Output (MaMIMO) channel characteristics, which play an important role in realistic deployments. In this paper, we first design a UAV MaMIMO communication platform for channel acquisition. We then use the testbed to measure uplink Channel State Information (CSI) between a rotary-wing drone and a 64-element MaMIMO base station (BS). For characterization, we focus on multidimensional channel stationarity, a fundamental metric in communication systems. Afterward, we present measurement results and analyze the channel statistics based on power delay profiles (PDPs), considering the space, time, and frequency domains. We propose the stationary angle (SA) as a supplementary metric to the stationary distance (SD) in the time domain. We analyze the coherence bandwidth and RMS delay spread for frequency stationarity. Finally, spatial correlations between array elements are analyzed to indicate the spatial stationarity of the array. This space-time-frequency stationarity characterization will benefit the physical-layer design of MaMIMO-UAV communications.
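The RMS delay spread used in frequency-stationarity analysis is the standard second central moment of the power delay profile. A minimal sketch on a synthetic exponential PDP (the decay constant and bin spacing are illustrative assumptions, not measured values):

```python
import numpy as np

# Synthetic exponentially decaying PDP: 0..1 us delay, 25 ns bins.
tau = np.arange(0, 1e-6, 25e-9)
pdp = np.exp(-tau / 100e-9)

# Normalize the PDP to a delay distribution and take second central moment.
p = pdp / pdp.sum()
mean_delay = np.sum(p * tau)
rms_delay_spread = np.sqrt(np.sum(p * tau ** 2) - mean_delay ** 2)

# Common rule of thumb: coherence bandwidth ~ 1 / (5 * rms delay spread).
coherence_bw = 1.0 / (5.0 * rms_delay_spread)
print(f"RMS delay spread = {rms_delay_spread * 1e9:.1f} ns, "
      f"coherence BW = {coherence_bw / 1e6:.2f} MHz")
```

For an exponential PDP the RMS delay spread approaches the decay constant, so this profile yields roughly 100 ns and a coherence bandwidth around 2 MHz.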
The far-field channel model has historically been used in wireless communications because of its mathematical simplicity, its convenience for algorithm design, and its validity for relatively small array apertures. With the need for high data rates, low latency, and ubiquitous connectivity in the sixth generation (6G) of communication systems, new technology enablers such as extremely large antenna arrays (ELAA), reconfigurable intelligent surfaces (RISs), and distributed multiple-input multiple-output (D-MIMO) systems will be adopted. These enablers not only aim to improve communication services but also affect localization and sensing (L&S), which are expected to be integrated into future wireless systems. Despite appearing in different scenarios and supporting different frequency bands, these enablers share so-called near-field (NF) features, which provide extra geometric information. In this work, starting from a brief description of NF channel features, we highlight the opportunities and challenges of 6G NF L&S.
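A quick way to see why these enablers operate in the near field is the Fraunhofer distance d_F = 2 D^2 / lambda: beyond it the far-field model holds, and apertures at ELAA/RIS scale push it to tens or hundreds of meters. The carrier frequencies and aperture sizes below are illustrative:

```python
# Fraunhofer (far-field) distance d_F = 2 * D**2 / lambda for a few
# illustrative carrier/aperture combinations.
c = 3e8  # speed of light, m/s
for f_ghz, aperture_m in [(3.5, 0.5), (28, 0.5), (28, 1.0)]:
    lam = c / (f_ghz * 1e9)            # wavelength in meters
    d_f = 2 * aperture_m ** 2 / lam    # far-field boundary in meters
    print(f"f = {f_ghz} GHz, D = {aperture_m} m -> d_F = {d_f:.1f} m")
```

A 1 m aperture at 28 GHz puts the far-field boundary near 187 m, so typical users sit well inside the near field, where spherical-wavefront curvature carries the extra geometric information exploited for L&S.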