Cell-free massive multiple-input multiple-output (CF-mMIMO) is a next-generation wireless access technology that offers superior coverage and spectral efficiency compared to conventional MIMO. With many future applications in unlicensed spectrum bands, networks will likely experience, and may even be limited by, out-of-system (OoS) interference. OoS interference differs from in-system interference caused by other served users in that the associated pilot signals are unknown or non-existent, which makes estimating the OoS interferer's channel difficult. In this paper, we propose a novel sequential algorithm for the suppression of OoS interference for uplink CF-mMIMO with a stripe (daisy-chain) topology. The proposed method has comparable performance to that of a fully centralized interference rejection combining algorithm but imposes a substantially lower fronthaul load.
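To make the centralized baseline concrete, the following is a minimal numpy sketch of interference rejection combining (IRC) for one served user and one OoS interferer whose pilots are unknown: the interference-plus-noise covariance is estimated from the received block and inverted to form the combining vector. All dimensions, the toy channel model, and the assumption of a perfectly known served-user channel are illustrative choices, not the paper's actual setup or its sequential stripe implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 200                                  # antennas, samples per block (toy sizes)

h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)   # served user's channel
g = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)   # unknown OoS channel

s = np.sign(rng.normal(size=N)).astype(complex)                   # served user's symbols
q = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)   # OoS waveform (no pilot)
V = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

Y = np.outer(h, s) + np.outer(g, q) + V

# Estimate the interference-plus-noise covariance from the residual after
# removing the served user's (here assumed perfectly known) contribution.
E = Y - np.outer(h, s)
R = E @ E.conj().T / N

w = np.linalg.solve(R, h)                       # MMSE-IRC combining vector
s_hat = (w.conj() @ Y) / (w.conj() @ h)         # unbiased combined estimate of s
print("post-combining symbol MSE:", np.mean(np.abs(s_hat - s) ** 2))
```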
We consider massive multiple-input multiple-output (MIMO) systems in the presence of Cauchy noise. First, we focus on the channel estimation problem. In the standard massive MIMO setup, the users transmit orthonormal pilots during the training phase, and the received signal at the base station is projected onto each pilot. This processing is optimal when the noise is Gaussian. We show that it is not optimal when the noise is Cauchy and, as a remedy, propose a channel estimation technique that operates on the raw received signal. Second, we derive uplink and downlink achievable rates in the presence of Cauchy noise for both perfect and imperfect channel state information. Finally, we derive log-likelihood ratio expressions for soft bit detection for both uplink and downlink, and simulate coded bit-error-rate curves. In addition, we derive and compare the symbol detectors for both Gaussian and Cauchy noise. An important observation is that the detector constructed for Cauchy noise performs well under both Gaussian and Cauchy noise; on the other hand, the detector for Gaussian noise works poorly in the presence of Cauchy noise. That is, the Cauchy detector is robust against heavy-tailed noise, whereas the Gaussian detector is not.
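The robustness observation can be reproduced in a few lines. The sketch below compares per-symbol maximum-likelihood BPSK detection across antennas under the squared-error metric (optimal for Gaussian noise) and the $\log(\gamma^2 + e^2)$ metric (optimal for Cauchy noise); the real-valued channel, the scale parameter, and the toy dimensions are our assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, gamma = 8, 20_000, 0.5                     # antennas, symbols, Cauchy scale
h = rng.normal(size=M)                           # real-valued channel (toy model)
s = np.sign(rng.normal(size=N))                  # BPSK symbols

def detect(Y, metric):
    # ML decision per symbol: total metric over antennas for s = +1 vs. s = -1.
    d_plus = metric(Y - h[:, None]).sum(axis=0)
    d_minus = metric(Y + h[:, None]).sum(axis=0)
    return np.where(d_plus < d_minus, 1.0, -1.0)

gauss = lambda e: e**2                           # ML metric under Gaussian noise
cauchy = lambda e: np.log(gamma**2 + e**2)       # ML metric under Cauchy noise

for noise_name, W in [("Gaussian", rng.normal(size=(M, N))),
                      ("Cauchy  ", gamma * rng.standard_cauchy(size=(M, N)))]:
    Y = h[:, None] * s + W
    for det_name, m in [("Gaussian", gauss), ("Cauchy  ", cauchy)]:
        print(f"{noise_name} noise / {det_name} detector: "
              f"BER = {np.mean(detect(Y, m) != s):.4f}")
```

The squared-error sum is dominated by single heavy-tailed outliers, which is why the Gaussian detector breaks down under Cauchy noise while the log metric caps each antenna's influence.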
Backscatter communication (BSC) is a promising solution for Internet-of-Things (IoT) connections owing to its low complexity, low cost, and energy efficiency for sensors. There are several network infrastructure setups that can be used for BSC with IoT nodes/passive devices. One of them is the bistatic setup, which typically requires high dynamic range and high-resolution analog-to-digital converters at the reader side. In this paper, we investigate a bistatic BSC setup with multiple antennas. We propose a novel algorithm that suppresses the direct link interference between the carrier emitter (CE) and the reader by beamforming into the nullspace of the CE-reader direct link, thereby decreasing the dynamic range of the system and increasing the detection performance of the backscatter device (BSD). Further, we derive a Neyman-Pearson (NP) test and an exact closed-form expression for its performance in the detection of the BSD. Finally, simulation results show that, compared to a system not using beamforming at the CE, the proposed algorithm significantly decreases the dynamic range of the system and increases the detection performance of the BSD. The resulting setup could be used in a host of practical fields such as agriculture, transportation, factories, hospitals, smart cities, and smart homes.
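As a hedged illustration of the core idea, the sketch below builds a transmit beamformer at a multi-antenna CE that lies in the nullspace of the CE-reader direct link (obtained via SVD) while steering maximum in-nullspace power toward the BSD; the antenna counts and the i.i.d. channel model are our own toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
Mc, Mr = 8, 4                                   # CE and reader antennas (toy sizes)
H_d = (rng.normal(size=(Mr, Mc)) + 1j * rng.normal(size=(Mr, Mc))) / np.sqrt(2)  # direct link
h_b = (rng.normal(size=Mc) + 1j * rng.normal(size=Mc)) / np.sqrt(2)              # CE -> BSD

# Orthonormal basis for the nullspace of the direct link via SVD.
_, _, Vh = np.linalg.svd(H_d)
V0 = Vh[Mr:].conj().T                           # Mc x (Mc - Mr) nullspace basis

# Within the nullspace, steer maximum power toward the backscatter device.
w = V0 @ (V0.conj().T @ h_b)
w /= np.linalg.norm(w)

print("direct-link leakage |H_d w|:", np.linalg.norm(H_d @ w))   # ~0 by construction
print("power toward BSD |h_b^H w|^2:", abs(h_b.conj() @ w) ** 2)
```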
Backscatter communication (BC) is a promising technique for the future Internet-of-Things (IoT) owing to its low complexity, low cost, and potential for energy-efficient operation in sensor networks. There are several network infrastructure setups that can be used for BC with IoT nodes. One of them is the bistatic setup, where typically there is a need for high dynamic range and high-resolution analog-to-digital converters at the reader. In this paper, we investigate a bistatic BC setup with multiple antennas. We propose a novel transmission scheme, which includes a protocol for channel estimation at the carrier emitter (CE) as well as a transmit beamformer construction that suppresses the direct link interference between the two ends of the bistatic link (namely, the CE and the reader) and increases the detection performance for the backscatter device (BD) symbol. Further, we derive a generalized likelihood ratio test (GLRT) to detect the symbol/presence of the BD, and we provide an iterative algorithm to estimate the unknown parameters in the GLRT. Finally, simulation results show that, compared to a system not using beamforming at the CE, the proposed algorithm significantly decreases the required dynamic range of the system and increases the detection performance for the BD symbol.
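The paper's GLRT jointly handles unknown parameters via an iterative algorithm; as a simplified stand-in, the sketch below shows the classical GLRT for detecting a known waveform with unknown complex amplitude in noise of unknown power, whose statistic is a normalized matched-filter energy. The waveform, SNR, and sample count are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, snr_lin = 256, 0.05                           # samples, per-sample SNR (toy values)
x = np.exp(1j * 2 * np.pi * rng.random(N))       # known probing waveform (assumption)

def glrt_stat(y, x):
    # GLRT for y = alpha*x + noise with unknown alpha and unknown noise power:
    # the statistic is the matched-filter energy normalized by the total energy,
    # which makes the false-alarm rate independent of the noise level (CFAR).
    return np.abs(x.conj() @ y) ** 2 / ((x.conj() @ x).real * (y.conj() @ y).real)

noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
alpha = np.sqrt(snr_lin) * np.exp(1j * rng.uniform(0, 2 * np.pi))   # unknown BD path gain

t0 = glrt_stat(noise, x)                         # H0: BD absent
t1 = glrt_stat(alpha * x + noise, x)             # H1: BD present
print(f"T(H0) = {t0:.4f}, T(H1) = {t1:.4f}")     # threshold set for a target false-alarm rate
```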
In this work, we focus on the communication aspect of decentralized learning, in which multiple agents train a shared machine learning model using decentralized stochastic gradient descent (D-SGD) over distributed data. In particular, we investigate the impact of broadcast transmission and a probabilistic random access policy on the convergence performance of D-SGD, considering the broadcast nature of wireless channels and the link dynamics in the communication topology. Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating convergence.
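For intuition on the access-probability optimization, consider the toy case of a fully connected topology with a collision channel, where a broadcast succeeds only if no other agent transmits in the same slot; the expected number of successful links is then $n p (1-p)^{n-1}$, maximized at $p = 1/n$. The sketch below (our simplification, not the paper's general model) verifies this numerically.

```python
import numpy as np

# Expected number of successful broadcast links per slot in a fully connected
# n-agent collision network: each agent transmits w.p. p, and a transmission
# is received only if no other agent transmits in the same slot.
def expected_success(p, n):
    return n * p * (1 - p) ** (n - 1)

n = 10
p_grid = np.linspace(0.01, 0.99, 99)
p_star = p_grid[np.argmax(expected_success(p_grid, n))]
print(f"n = {n}: best access probability ~ {p_star:.2f} (analytic optimum 1/n = {1/n:.2f})")
```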
It is well known that GNSS receivers are vulnerable to jamming and spoofing attacks, and numerous such incidents have been reported all over the world in the last decade. The notion of participatory sensing, or crowdsensing, is that a large ensemble of voluntary contributors provides measurements, rather than relying on a dedicated sensing infrastructure. The participatory sensing network under consideration in this work is based on GNSS receivers embedded in, for example, mobile phones. The provided measurements are the receiver-reported carrier-to-noise-density ratio ($C/N_0$) estimates or automatic gain control (AGC) values. In this work, we exploit $C/N_0$ measurements to locate a GNSS jammer, using multiple receivers in a crowdsourcing manner. We extend a previous jammer position estimator by including only data received during the parts of the sensing period in which the sensor detects jamming. In addition, we perform hardware testing to verify and evaluate the proposed algorithm and the state-of-the-art algorithms it is compared against. Evaluations are performed using a Samsung S20+ mobile phone as the participatory sensor and a Spirent GSS9000 GNSS simulator to generate the GNSS and jamming signals. The proposed algorithm is shown to work well when using $C/N_0$ measurements and to outperform the alternative algorithms in the evaluated scenarios, producing a median error of 50 meters when the path-loss exponent is 2; the error increases with higher path-loss exponents. The AGC output from the phone was too noisy and needs further processing to be useful for position estimation.
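To illustrate the kind of estimator involved, the sketch below fits a jammer position to received-power measurements under a log-distance path-loss model by grid search; the unknown jammer transmit power cancels because, at the true position, the quantity $p_i + 10\eta\log_{10} d_i$ is constant across sensors, so positions can be scored by its variance. The numbers, and the omission of the paper's jamming-detection gating, are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)
eta = 2.0                                        # path-loss exponent (as in the abstract)
jammer = np.array([120.0, -40.0])                # true position in meters (toy values)
sensors = rng.uniform(-500, 500, size=(25, 2))   # crowdsourced receiver positions

d = np.linalg.norm(sensors - jammer, axis=1)
p_rx = 30.0 - 10 * eta * np.log10(d) + rng.normal(0, 2, size=len(d))  # noisy power in dB

# Grid search over candidate positions: the best-fit transmit power for a trial
# position is the mean of p_rx + 10*eta*log10(distance), so the residual variance
# of that quantity scores the position without knowing the transmit power.
gx, gy = np.meshgrid(np.linspace(-500, 500, 201), np.linspace(-500, 500, 201))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
D = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
score = (p_rx[None, :] + 10 * eta * np.log10(D)).var(axis=1)
est = grid[np.argmin(score)]
print("estimated jammer position:", est, " error [m]:", np.linalg.norm(est - jammer))
```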
In this work, we consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints. Due to limited communication resources and latency requirements, only a subset of devices is scheduled to participate in the local training process in each iteration. We formulate a stochastic network optimization problem for designing a dynamic scheduling policy that maximizes the time-average data importance of the scheduled user sets subject to energy consumption and latency constraints. Our proposed algorithm, based on the Lyapunov optimization framework, outperforms alternative methods that do not take the time-varying data importance into account, especially when the generation of training data shows strong temporal correlation.
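A minimal sketch of the Lyapunov (drift-plus-penalty) scheduling idea is given below: each device keeps a virtual energy-deficit queue, and each round the scheduler picks the devices with the largest positive weight V*importance - queue*energy_cost. The importance and cost models, the budget, and all constants are toy assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, T, V = 20, 5, 1000, 10.0        # devices, schedule size, rounds, tradeoff parameter
e_avg = 0.2                            # long-term energy budget per device per round (toy)
Q = np.zeros(N)                        # virtual energy-deficit queues
used_total = np.zeros(N)

for t in range(T):
    imp = rng.exponential(size=N)            # data importance this round (toy model)
    cost = rng.uniform(0.5, 1.0, size=N)     # energy cost if scheduled this round

    # Drift-plus-penalty rule: schedule up to K devices with the largest
    # (and positive) weight V*importance - Q*energy_cost.
    weight = V * imp - Q * cost
    cand = np.argsort(weight)[-K:]
    sched = cand[weight[cand] > 0]

    used = np.zeros(N)
    used[sched] = cost[sched]
    used_total += used
    Q = np.maximum(Q + used - e_avg, 0.0)    # queue update enforces the average budget

print(f"avg energy per device per round: {used_total.mean() / T:.3f} (budget {e_avg})")
```

Growing queues make recently scheduled devices less attractive, which is how the time-average energy constraint is met without knowing future data importance.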
Antenna arrays can be either reciprocity-calibrated (R-calibrated), which facilitates reciprocity-based beamforming, or fully calibrated (F-calibrated), which additionally facilitates transmission and reception in specific physical directions. To provide context, we first review the fundamental principles of over-the-air R- and F-calibration of distributed arrays. We then describe a new method for calibrating two arrays that are individually F-calibrated, such that the combined array becomes jointly F-calibrated.
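As background for the R-calibration principle, the sketch below runs the classical pairwise over-the-air calibration on a toy noise-free array: bidirectional soundings between antennas satisfy y_ij c_i = y_ji c_j with c_k = t_k / r_k, so the calibration vector is recovered (up to one common scalar) as a null vector. This illustrates R-calibration only; the paper's joint F-calibration of two arrays goes beyond it.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 6                                              # antennas across the array (toy size)
t = rng.normal(size=M) + 1j * rng.normal(size=M)   # unknown per-antenna TX gains
r = rng.normal(size=M) + 1j * rng.normal(size=M)   # unknown per-antenna RX gains
H = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
H = (H + H.T) / 2                                  # reciprocal propagation: h_ij = h_ji

Y = np.diag(r) @ H @ np.diag(t)                    # y_ij: antenna i receives antenna j

# Reciprocity implies y_ij * c_i - y_ji * c_j = 0 with c_k = t_k / r_k, so c is
# the null vector of the stacked pairwise equations.
A = np.zeros((M * (M - 1) // 2, M), dtype=complex)
k = 0
for i in range(M):
    for j in range(i + 1, M):
        A[k, i], A[k, j] = Y[i, j], -Y[j, i]
        k += 1

c_hat = np.linalg.svd(A)[2][-1].conj()             # calibration vector, up to one scalar
c_true = t / r
ratio = (c_hat / c_true) / (c_hat[0] / c_true[0])
print("spread of c_hat/c_true (0 means calibrated up to a common scalar):",
      np.std(np.abs(ratio)))
```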
Wireless communication technology has progressed dramatically over the past 25 years, in terms of societal adoption as well as technical sophistication. In 1998, mobile phones were still in the process of becoming compact and affordable devices that could be widely utilized in both developed and developing countries. There were "only" 300 million mobile subscribers in the world [1]. Cellular networks were among the first privatized telecommunication markets, and competition turned the devices into fashion accessories with attractive designs that could be individualized. The service was circumscribed to telephony and text messaging, but it was groundbreaking in that, for the first time, telecommunication was between people rather than locations. Wireless networks have changed dramatically over the past few decades, enabling this revolution in service provisioning and making it possible to accommodate the ensuing dramatic growth in traffic. There are many contributing components, including new air interfaces for faster transmission, channel coding for enhanced reliability, improved source compression to remove redundancies, and leaner protocols to reduce overheads. Signal processing is at the core of these improvements, but nowhere has it played a bigger role than in the development of multiantenna communication. This article tells the story of how major signal processing advances have transformed the early multiantenna concepts into mainstream technology over the past 25 years. The story therefore begins somewhat arbitrarily in 1998. A broad account of the state-of-the-art signal processing techniques for wireless systems by 1998 can be found in [2], and its contrast with recent textbooks such as [3]-[5] reveals the dramatic leap forward that has taken place in the interim.
Fifth generation (5G) mobile communication systems have entered the stage of commercial deployment, providing users with new services and improved user experiences as well as offering a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published, which aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and a consensus on what 6G will be is expected by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified to stimulate the ongoing global debate. Finally, lessons learned to date concerning 6G networks are discussed.