Abstract: Distributed MIMO (D-MIMO) has emerged as a key architecture for future sixth-generation (6G) networks, enabling cooperative transmission across spatially distributed access points (APs). However, most existing studies rely on idealized channel models and lack hardware validation, leaving a gap between algorithmic design and practical deployment. Meanwhile, recent advances in artificial intelligence (AI)-driven precoding have shown strong potential for learning nonlinear channel-to-precoder mappings, but real-world deployment remains limited by challenges in data collection and model generalization. This work presents a framework for implementing and validating an AI-based precoder on a D-MIMO testbed with hardware reciprocity calibration. A pre-trained graph neural network (GNN)-based model is fine-tuned using real-world channel state information (CSI) collected from the Techtile platform and evaluated under both interpolation and extrapolation scenarios before end-to-end validation. Experimental results demonstrate a 15.7% performance gain over the pre-trained model in the multi-user case after fine-tuning, while in the single-user scenario the fine-tuned model achieves near-maximum ratio transmission (MRT) performance on unseen positions, with less than 0.7 bits/channel use degradation relative to a total throughput of 5.19 bits/channel use. Further analysis confirms the data efficiency of real-world measurements, showing consistent gains with increasing training samples, and end-to-end validation verifies coherent power focusing comparable to MRT.
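
For context on the MRT baseline referenced above, the sketch below shows how a maximum ratio transmission precoder can be formed from a channel matrix (conjugate matching followed by power normalization). This is a minimal illustration only: the array shapes, the total-power normalization, and the synthetic Rayleigh channel used in the example are assumptions made for the sketch, not the testbed's actual processing chain.

    import numpy as np

    def mrt_precoder(H, total_power=1.0):
        """Maximum ratio transmission (MRT) precoder.

        H: (K, M) complex downlink channel matrix (K users, M distributed antennas).
        Returns W: (M, K) precoding matrix scaled to the total power budget.
        """
        W = H.conj().T                      # conjugate-match each user's channel
        W /= np.linalg.norm(W, 'fro')       # normalize total transmit power to 1
        return np.sqrt(total_power) * W

    # Example with an i.i.d. Rayleigh-fading channel as a stand-in for measured CSI
    rng = np.random.default_rng(0)
    K, M = 2, 8
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
    W = mrt_precoder(H)
    print(np.abs(H @ W))                    # strong diagonal = coherent power focusing per user
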
Abstract: Cell-free massive MIMO (CF-mMIMO) has emerged as a promising paradigm for delivering uniformly high-quality coverage in future wireless networks. To address the inherent challenges of precoding in such distributed systems, recent studies have explored graph neural network (GNN)-based methods, leveraging their powerful representation capabilities. However, these approaches have predominantly been trained and validated on synthetic datasets, leaving their generalizability to real-world propagation environments largely unverified. In this work, we first pre-train the GNN on simulated channel state information (CSI) data that incorporates standard propagation models and small-scale Rayleigh fading. We then fine-tune the model on real-world CSI measurements collected from a physical testbed equipped with distributed access points (APs). To balance the retention of pre-trained features with adaptation to real-world conditions, we adopt a layer-freezing strategy during fine-tuning, in which several early GNN layers are frozen and only the later layers remain trainable. Numerical results demonstrate that the fine-tuned GNN significantly outperforms the pre-trained model, achieving a gain of approximately 8.2 bits per channel use at 20 dB signal-to-noise ratio (SNR), corresponding to a 15.7% improvement. These findings highlight the critical role of transfer learning and underscore the potential of GNN-based precoding techniques to generalize effectively from synthetic to real-world wireless environments.
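
The layer-freezing fine-tuning described above can be sketched in PyTorch as follows. This is a minimal illustration under assumed choices: the simplified per-node MLP layers standing in for the actual message-passing GNN, the checkpoint name pretrained_gnn.pt, the number of frozen layers, and the learning rate are all hypothetical; only the pattern of freezing the early layers and training the later ones is taken from the abstract.

    import torch
    import torch.nn as nn

    # Hypothetical GNN precoder: a stack of layers plus an output head.
    # The paper's actual message-passing architecture is not reproduced here.
    class GNNPrecoder(nn.Module):
        def __init__(self, in_dim=2, hidden_dim=64, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList(
                [nn.Linear(in_dim if i == 0 else hidden_dim, hidden_dim)
                 for i in range(num_layers)]
            )
            self.head = nn.Linear(hidden_dim, 2)   # real/imag precoding coefficients

        def forward(self, x):
            for layer in self.layers:
                x = torch.relu(layer(x))
            return self.head(x)

    model = GNNPrecoder()
    # Assumed checkpoint from synthetic-CSI pre-training
    model.load_state_dict(torch.load("pretrained_gnn.pt"))

    # Layer-freezing: keep the early layers fixed, adapt only later layers and the head.
    num_frozen = 2
    for layer in model.layers[:num_frozen]:
        for p in layer.parameters():
            p.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    # ... fine-tune on measured CSI batches with the chosen rate-based loss ...
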




Abstract: To keep supporting next-generation requirements, the radio access infrastructure will increasingly densify. Cell-free (CF) network architectures are emerging, combining dense deployments with extreme flexibility in allocating resources to users. In parallel, the Open Radio Access Network (O-RAN) paradigm is transforming the RAN towards an open, intelligent, virtualized, and fully interoperable architecture. This paradigm brings the flexibility and intelligent control opportunities needed for CF networking. In this paper, we document the current O-RAN terminology and contrast it with common CF processing approaches. We then discuss the main O-RAN innovations and the research challenges that remain to be solved.