Cell-free massive multiple-input multiple-output (MIMO) is a promising technology for next-generation communication systems. This work proposes a novel partially coherent (PC) transmission framework to cope with the challenge of phase misalignment among the access points (APs), which is important for unlocking the full potential of cell-free massive MIMO technology. With the PC operation, the APs are only required to be phase-aligned within clusters. Each cluster transmits the same data stream towards each user equipment (UE), while different clusters send different data streams. We first propose a novel algorithm to group APs into clusters such that the distance between any two APs within a cluster is smaller than a reference distance that ensures phase alignment of these APs. Then, we propose new algorithms that optimize the combining at the UEs and the precoding at the APs to maximize the downlink sum data rate. We also propose a novel algorithm for data stream allocation to further improve the sum data rate of the PC operation. Numerical results show that the PC operation using the proposed framework with a sufficiently small reference distance can offer a sum rate close to that of the ideal fully coherent (FC) operation, which requires network-wide phase alignment. This demonstrates the potential of PC operation in practical deployments of cell-free massive MIMO networks.
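As a rough illustration of the clustering constraint above (every pair of APs within a cluster must be closer than the reference distance), the following greedy first-fit sketch groups APs; the function name `cluster_aps` and the greedy strategy are our assumptions here, not the paper's actual clustering algorithm.

```python
import numpy as np

def cluster_aps(positions, d_ref):
    """Greedily group APs so that every pair within a cluster is closer
    than the reference distance d_ref. Illustrative sketch only."""
    clusters = []
    for i, p in enumerate(positions):
        placed = False
        for c in clusters:
            # An AP joins a cluster only if it is within d_ref of ALL members.
            if all(np.linalg.norm(p - positions[j]) < d_ref for j in c):
                c.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])  # start a new cluster
    return clusters

aps = np.array([[0.0, 0.0], [5.0, 0.0], [100.0, 0.0], [103.0, 0.0]])
print(cluster_aps(aps, d_ref=10.0))  # [[0, 1], [2, 3]]
```

With the reference distance of 10, the two AP pairs at opposite ends of the line form two separate clusters.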
Consensus-based decentralized stochastic gradient descent (D-SGD) is a widely adopted algorithm for decentralized training of machine learning models across networked agents. A crucial part of D-SGD is the consensus-based model averaging, which heavily relies on information exchange and fusion among the nodes. Specifically, for consensus averaging over wireless networks, communication coordination is necessary to determine when and how a node can access the channel and transmit (or receive) information to (or from) its neighbors. In this work, we propose $\texttt{BASS}$, a broadcast-based subgraph sampling method designed to accelerate the convergence of D-SGD while considering the actual communication cost per iteration. $\texttt{BASS}$ creates a set of mixing matrix candidates that represent sparser subgraphs of the base topology. In each consensus iteration, one mixing matrix is sampled, leading to a specific scheduling decision that activates multiple collision-free subsets of nodes. The sampling occurs in a probabilistic manner, and the elements of the mixing matrices, along with their sampling probabilities, are jointly optimized. Simulation results demonstrate that $\texttt{BASS}$ enables faster convergence with fewer transmission slots compared to existing link-based scheduling methods. In conclusion, the inherent broadcasting nature of wireless channels offers intrinsic advantages in accelerating the convergence of decentralized optimization and learning.
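A minimal sketch of the consensus-averaging mechanism described above, assuming the mixing-matrix candidates (sparser subgraphs of a 3-node path graph) and their sampling probabilities are already given; in BASS these are jointly optimized, and the local gradient step of D-SGD is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate mixing matrices, each symmetric and doubly stochastic,
# corresponding to activating one collision-free link of the base topology.
W_candidates = [
    np.array([[0.5, 0.5, 0.0],   # activate link (0, 1)
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]]),
    np.array([[1.0, 0.0, 0.0],   # activate link (1, 2)
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]]),
]
probs = [0.5, 0.5]

x = np.array([0.0, 3.0, 6.0])  # local model parameters (one scalar per node)
for _ in range(200):
    W = W_candidates[rng.choice(len(W_candidates), p=probs)]
    x = W @ x  # consensus averaging step

print(x)  # all nodes approach the network average 3.0
```

Because every candidate is doubly stochastic, the network average is preserved at each step, and random alternation between the subgraphs drives all nodes to consensus.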
Distributed antennas must be phase-calibrated (phase-synchronized) for certain operations, such as reciprocity-based joint coherent downlink beamforming, to work. We use rigorous signal processing tools to analyze the accuracy of calibration protocols that are based on over-the-air measurements between antennas, with a focus on scalability aspects for large systems. We show that (i) for some who-measures-on-whom topologies, the errors in the calibration process are unbounded when the network grows; and (ii) despite that conclusion, it is optimal -- irrespective of the topology -- to solve a single calibration problem for the entire system and use the result everywhere to support the beamforming. The analyses are exemplified by investigating specific topologies, including lines, rings, and two-dimensional surfaces.
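To illustrate the error-accumulation phenomenon on a line topology, the following sketch solves the calibration problem by least squares from noisy pairwise over-the-air phase measurements; the single-phase-per-antenna model, the known reference phase, and the noise level are simplifying assumptions for illustration, not the protocols analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                    # antennas on a line, neighbor-to-neighbor measurements
phi = rng.uniform(-np.pi, np.pi, n)      # true (unknown) per-antenna phases
sigma = 0.05                             # per-measurement phase noise (rad)

# y[i] is a noisy over-the-air observation of phi[i+1] - phi[i].
y = np.diff(phi) + sigma * rng.standard_normal(n - 1)

# With antenna 0 as reference, the line topology is a tree, so the
# least-squares solution is simply the cumulative sum of the measurements;
# the error variance therefore grows linearly with the distance from the
# reference, illustrating how calibration errors can become unbounded
# as the network grows.
phi_hat = np.concatenate(([0.0], np.cumsum(y))) + phi[0]  # anchored for comparison

print(np.abs(phi_hat - phi))  # residual errors relative to antenna 0
```

The residual-error sequence behaves like a random walk away from the reference antenna, which is the scalability issue analyzed rigorously in the paper.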
Wirelessly connected devices can collaboratively train a machine learning model using federated learning, where the aggregation of model updates occurs using over-the-air computation. Carrier frequency offset caused by imprecise clocks in the devices makes the phase of the over-the-air channel drift randomly, such that late symbols in a coherence block are transmitted with lower quality than early symbols. To mitigate the effect of degrading symbol quality, we propose a scheme where one of the permutations Roll, Flip, and Sort is applied to the gradients before transmission. Through simulations, we show that these permutations can either improve or degrade learning performance. Furthermore, we derive the expectation and variance of the gradient estimate, and show that the variance grows exponentially with the number of symbols in a coherence block.
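A minimal sketch of the three permutations on a flattened gradient vector, assuming Sort orders entries by decreasing magnitude so that the largest entries ride on the early, higher-quality symbols (the exact definitions follow the paper, and the receiver would invert the chosen permutation after aggregation):

```python
import numpy as np

g = np.array([0.3, -1.2, 0.7, 0.1, -0.4])  # a flattened gradient vector

roll = np.roll(g, 1)                # cyclic shift of the entries
flip = g[::-1]                      # reverse the order
sort = g[np.argsort(-np.abs(g))]    # largest-magnitude entries first

print(roll)  # [-0.4  0.3 -1.2  0.7  0.1]
print(flip)  # [-0.4  0.1  0.7 -1.2  0.3]
print(sort)  # [-1.2  0.7 -0.4  0.3  0.1]
```

Sort requires transmitting (or sharing) the permutation indices so the server can undo it, whereas Roll and Flip are fixed, data-independent permutations.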
Ultra-dense cell-free massive multiple-input multiple-output (CF-MMIMO) has emerged as a promising technology expected to meet the future ubiquitous connectivity requirements and ever-growing data traffic demands in 6G. This article provides a contemporary overview of ultra-dense CF-MMIMO networks, and addresses important unresolved questions on their future deployment. We first present a comprehensive survey of state-of-the-art research on CF-MMIMO and ultra-dense networks. Then, we discuss the key challenges of CF-MMIMO under ultra-dense scenarios, such as low-complexity architecture and processing, low-complexity/scalable resource allocation, fronthaul limitations, massive access, synchronization, and channel acquisition. Finally, we answer key open questions, considering different design comparisons and discussing suitable methods for dealing with the key challenges of ultra-dense CF-MMIMO. The discussion aims to provide a valuable roadmap of future research directions in this area, facilitating the development of CF-MMIMO for 6G.
Over-the-Air (OtA) computation is a newly emerged concept for computing functions of data from distributed nodes by taking advantage of the wave superposition property of wireless channels. Despite its advantage in communication efficiency, OtA computation is associated with significant security and privacy concerns that have so far not been thoroughly investigated, especially in the case of active attacks. In this paper, we propose and evaluate a detection scheme against active attacks in OtA computation systems. More explicitly, we consider an active attacker that is an external node sending random or misleading data to alter the aggregated data received by the server. To detect the presence of the attacker, in every communication period, legitimate users send some dummy samples in addition to the real data. We propose a detector design that relies on a shared secret, known only to the legitimate users and the server, which is used to hide the transmitted signal in a secret subspace. After the server projects the received vector back to the original subspace, the dummy samples can be used to detect active attacks. We show that this design achieves good detection performance at a small cost in terms of channel resources.
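The dummy-sample idea can be sketched as follows, with a random orthonormal rotation standing in for the shared-secret subspace construction; the zero-valued dummies and the simple energy-threshold detector are illustrative assumptions, not the paper's exact detector design.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_dummy = 8, 2

# Shared secret: a random orthonormal rotation known only to the
# legitimate users and the server.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

data = rng.standard_normal(n - n_dummy)
dummy = np.zeros(n_dummy)                 # known dummy samples (zeros here)
x_tx = Q @ np.concatenate([data, dummy])  # hide the signal in the secret basis

def detect(received, threshold=0.5):
    """Project back to the original basis and test the energy that
    lands in the dummy positions against a threshold."""
    z = Q.T @ received
    return bool(np.sum(z[-n_dummy:] ** 2) > threshold)

attack = 0.8 * rng.standard_normal(n)     # attacker's random injection
print(detect(x_tx))            # False: without an attack, dummy slots stay clean
print(detect(x_tx + attack))   # likely True: attack energy leaks into dummy slots
```

An attacker who does not know `Q` cannot avoid the dummy positions, so any sufficiently energetic injection spills into them and trips the detector, while legitimate transmissions leave those positions untouched.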
In distributed massive multiple-input multiple-output (MIMO) systems, multiple geographically separated access points (APs) communicate simultaneously with a user, leveraging the benefits of multi-antenna coherent MIMO processing and macro-diversity gains from the distributed setups. However, time and frequency synchronization of the multiple APs is crucial to achieve good performance and enable joint precoding. In this paper, we analyze the synchronization requirement among multiple APs from a reciprocity perspective, taking into account the multiplicative impairments caused by mismatches in radio frequency (RF) hardware. We demonstrate that a phase calibration of reciprocity-calibrated APs is sufficient for the joint coherent transmission of data to the user. To achieve synchronization, we propose a novel over-the-air synchronization protocol, named BeamSync, to calibrate the geographically separated APs without sending any measurements to the central processing unit (CPU) through fronthaul. We show that sending the synchronization signal in the dominant direction of the channel between APs is optimal. Additionally, we derive the optimal phase and frequency offset estimators. Simulation results indicate that the proposed BeamSync method enhances performance by 3 dB when the number of antennas at the APs is doubled. Moreover, the method performs well compared to traditional beamforming techniques.
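A much-simplified sketch of estimating an inter-AP phase offset from a pilot beamformed in the dominant channel direction (for a MISO inter-AP link, the dominant direction reduces to matched filtering); the single scalar offset model and noise level are our assumptions here, not the full BeamSync protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 8                                    # antennas at the transmitting AP

# Reciprocal inter-AP channel and an unknown phase offset between the
# two APs' RF chains (simplified single-offset model for illustration).
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
theta = 1.1                              # true phase offset (rad)

# Beamform the sync signal in the dominant channel direction.
w = np.conj(g) / np.linalg.norm(g)
s = 1.0 + 0.0j                           # known pilot symbol
noise = 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
y = np.exp(1j * theta) * (g @ w) * s + noise

# g @ w = ||g|| is real and positive, so the residual phase of the
# received pilot is (approximately) the offset theta.
theta_hat = np.angle(y / ((g @ w) * s))

print(theta_hat)  # close to 1.1
```

Beamforming along the dominant direction maximizes the received pilot energy, which is why the paper shows this choice of synchronization signal is optimal for the offset estimation.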
We propose a novel resource-efficient analog over-the-air (OTA) computation framework to address the demanding requirements of the uplink (UL) fronthaul between the access points (APs) and the central processing unit (CPU) in cell-free massive multiple-input multiple-output (MIMO) systems. We discuss the drawbacks of wired and wireless fronthaul solutions, and show that our proposed mechanism is efficient and scalable as the number of APs increases. We present the transmit precoding and two-phase power assignment strategies at the APs to coherently combine the signals OTA in a spectrally efficient manner. We derive the statistics of the APs' locally available signals, which enable us to obtain analytical expressions for the Bayesian and classical estimators of the OTA-combined signals. We empirically evaluate the normalized mean square error (NMSE), symbol error rate (SER), and coded bit error rate (BER) of our developed solution, and benchmark against a state-of-the-art wired-fronthaul-based system.
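The coherent OTA combining step can be caricatured as follows, with simple per-AP channel inversion standing in for the paper's precoding and two-phase power assignment (plain inversion ignores the per-AP power constraints that the paper's strategies handle):

```python
import numpy as np

rng = np.random.default_rng(4)
L = 16                                   # number of APs

# Locally available soft estimates at the APs whose sum the CPU wants
# (here: the same unit symbol plus local noise, purely for illustration).
local = 1.0 + 0.1 * rng.standard_normal(L)

# Single-antenna UL fronthaul channels from the APs to the CPU.
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

# Each AP pre-rotates and scales its signal so that all contributions
# add coherently, with unit gain, at the CPU.
x = local * np.conj(h) / np.abs(h) ** 2

noise = 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
y = np.sum(h * x) + noise                # superposition over the wireless channel

print(y.real, np.sum(local))  # the OTA sum closely matches the target sum
```

All APs transmit in the same channel resource, so the fronthaul cost does not grow with the number of APs, which is the scalability argument of the framework.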
This work centers on the communication aspects of decentralized learning over wireless networks, using consensus-based decentralized stochastic gradient descent (D-SGD). Considering the actual communication cost or delay caused by in-network information exchange in an iterative process, our goal is to achieve fast convergence of the algorithm measured by improvement per transmission slot. We propose BASS, an efficient communication framework for D-SGD over wireless networks with broadcast transmission and probabilistic subgraph sampling. In each iteration, we activate multiple subsets of non-interfering nodes to broadcast model updates to their neighbors. These subsets are randomly activated over time, with probabilities reflecting their importance in network connectivity and subject to a communication cost constraint (e.g., the average number of transmission slots per iteration). During the consensus update step, only bi-directional links are effectively preserved to maintain communication symmetry. In comparison to existing link-based scheduling methods, the inherent broadcasting nature of wireless channels offers intrinsic advantages in speeding up the convergence of decentralized learning by activating more communication links with the same number of transmission slots.
We consider a robust beamforming problem where a large amount of downlink (DL) channel state information (CSI) data available at a multiple-antenna access point (AP) is used to improve the link quality to a user equipment (UE) for beyond-5G and 6G applications such as environment-specific initial access (IA) or wireless power transfer (WPT). As the DL CSI available at the current instant may be imperfect or outdated, we propose a novel scheme which utilizes the (unknown) correlation between the antenna domain and the physical domain to localize the possible future UE positions from the historical CSI database. Then, we develop a codebook design procedure to maximize the minimum sum beamforming gain over that localized CSI neighborhood. We also incorporate a UE-specific parameter to enlarge the neighborhood and further robustify the link. We adopt an indoor channel model to demonstrate the performance of our solution, and benchmark against a usually optimal (but here sub-optimal, due to outdated CSI) maximum ratio transmission (MRT) and a subspace-based method. We numerically show that our algorithm outperforms the other methods by a large margin. This shows that customized, environment-specific solutions are important for many future wireless applications, and we have paved the way for further data-driven approaches.
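As a toy version of the max-min design above, the following sketch selects, from a DFT codebook, the beam with the best worst-case gain over a localized CSI neighborhood; the random neighborhood, the fixed DFT codebook, and single-beam selection (rather than full codebook design) are illustrative simplifications of the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
M, K = 16, 20                            # AP antennas, stored CSI samples

# Historical DL CSI samples that localize plausible future UE channels;
# in the paper this neighborhood comes from the CSI database, here it
# is simply drawn at random.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# A DFT codebook of unit-norm candidate beams.
idx = np.arange(M)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / M) / np.sqrt(M)

# Max-min selection: pick the beam with the best worst-case gain
# over the whole CSI neighborhood.
gains = np.abs(H @ F) ** 2               # (K, M): gain of each beam on each sample
worst_case = gains.min(axis=0)
best = int(np.argmax(worst_case))

print(best, worst_case[best])
```

Unlike MRT, which matches a single (possibly outdated) channel estimate, the max-min criterion hedges against every channel in the neighborhood, which is the source of the robustness.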