In the evolution of 6th Generation (6G) technology, the emergence of cell-free networking presents a paradigm shift, revolutionizing user experiences within densely deployed networks where distributed access points collaborate. However, integrating intelligent mechanisms is crucial for optimizing the efficiency, scalability, and adaptability of these 6G cell-free networks. One application aiming to optimize spectrum usage is Automatic Modulation Classification (AMC), a vital component for classifying and dynamically adjusting modulation schemes. This paper explores distributed solutions for AMC in cell-free networks, comparing two practical approaches in terms of training, computational complexity, and accuracy. The first approach addresses scenarios where signal sharing is not feasible due to privacy concerns or fronthaul limitations; our findings reveal that comparable accuracy can be maintained, albeit at the cost of increased computational demand. The second approach considers a central model and multiple distributed models that collaboratively classify the modulation. This hybrid model leverages diversity gain through signal combining but requires synchronization and signal sharing. The hybrid model demonstrates superior performance, achieving a 2.5% improvement in accuracy with an equivalent total computational load. Notably, the hybrid model distributes the computation across multiple devices, resulting in a lower per-device computational load.
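The diversity gain from combining the soft outputs of several distributed classifiers can be illustrated with a small toy experiment. The sketch below is not the paper's neural models: it uses simple correlation-based classifiers over synthetic "modulation" templates, and all names, class counts, and noise levels are invented for illustration. It only demonstrates the general principle that fusing scores from independently corrupted observations improves classification accuracy over a single observation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: decide which of K "modulation" templates generated a noisy
# signal. Each access point (AP) observes an independently corrupted copy;
# a local classifier emits per-class soft scores, and the hybrid scheme
# fuses the soft scores across APs (diversity combining).
K, N, TRIALS, NOISE = 4, 64, 2000, 4.0  # classes, samples, trials, noise std
templates = rng.normal(0, 1, (K, N))

def local_scores(rx):
    # Normalized correlation with each template, mapped through a softmax.
    s = templates @ rx / N
    e = np.exp(s - s.max())
    return e / e.sum()

def run(num_aps):
    correct = 0
    for _ in range(TRIALS):
        k = rng.integers(K)           # true class for this trial
        fused = np.zeros(K)
        for _ in range(num_aps):      # each AP sees independent noise
            rx = templates[k] + rng.normal(0, NOISE, N)
            fused += local_scores(rx)
        correct += fused.argmax() == k
    return correct / TRIALS

acc_single = run(1)   # one AP classifying alone
acc_fused = run(4)    # four APs fusing their soft scores
print(f"single-AP accuracy: {acc_single:.3f}, fused (4 APs): {acc_fused:.3f}")
```

The fused accuracy exceeds the single-AP accuracy because the noise realizations at the APs are independent, so averaging their soft scores suppresses observation noise, which is the same mechanism the abstract refers to as diversity gain through signal combining.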
Accurate network synchronization is a key enabler for services such as coherent transmission, cooperative decoding, and localization in distributed and cell-free networks. Unlike centralized networks, where synchronization is generally needed only between a user and a base station, synchronization in distributed networks must be maintained among several cooperating devices, an inherently challenging task due to hardware imperfections and environmental influences on the clock, such as temperature. As a result, distributed networks must be synchronized frequently, introducing significant synchronization overhead. In this paper, we propose an online-LSTM-based model for clock skew and drift compensation that extends the interval between required synchronization signals, thereby reducing the synchronization overhead. We conducted comprehensive experiments to assess the performance of the proposed model. Our measurement-based results show that the proposed model reduces the need for re-synchronization between devices by an order of magnitude, keeping devices synchronized to within 10 microseconds with a probability of 90%.
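The idea of online recurrent prediction of clock behavior can be sketched as follows. This is a deliberate simplification of the abstract's online-LSTM model, with all parameters invented: an LSTM cell with fixed random weights encodes the recent history of clock-offset increments, and only a linear readout is updated online by stochastic gradient descent as each new offset measurement arrives. The paper's actual model presumably trains the full LSTM; here the fixed-encoder variant keeps the sketch short and self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Minimal LSTM cell (forward only), weights fixed at random. ---
H, X = 16, 1  # hidden size, input size (one offset increment per step)
W = rng.normal(0, 0.3, (4 * H, X + H))  # gates i, f, o, g stacked row-wise
b = np.zeros(4 * H)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# --- Linear readout trained online with SGD on the prediction error. ---
w_out, b_out, lr = np.zeros(H), 0.0, 0.05

def predict_online(offsets):
    """Walk through measured clock offsets; at each step predict the next
    offset increment (skew) from the LSTM state, then update the readout
    once the true increment is observed (online learning)."""
    global w_out, b_out
    deltas = np.diff(offsets)  # per-interval offset increments
    h, c = np.zeros(H), np.zeros(H)
    errors = []
    for t in range(len(deltas) - 1):
        h, c = lstm_step(np.array([deltas[t]]), h, c)
        pred = w_out @ h + b_out
        err = deltas[t + 1] - pred
        errors.append(err)
        w_out += lr * err * h  # SGD step on squared prediction error
        b_out += lr * err
    return np.array(errors)

# Synthetic clock offset: linear skew plus slow drift plus measurement noise.
t = np.arange(400)
offsets = 0.8 * t + 5.0 * np.sin(t / 40.0) + rng.normal(0, 0.05, t.size)
errors = predict_online(offsets)
print(f"mean |error|: first 50 steps {np.abs(errors[:50]).mean():.3f}, "
      f"last 50 steps {np.abs(errors[-50:]).mean():.3f}")
```

Because the predictor keeps adapting to each new measurement, the residual prediction error shrinks over time; in a synchronization protocol, a device would apply the predicted increment to correct its local clock and request a fresh reference signal only when the accumulated uncertainty exceeds the target precision.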