Cell-free communication has the potential to significantly improve grant-free transmission in massive machine-type communication, wherein multiple access points jointly serve a large number of user equipments to improve coverage and spectral efficiency. In this paper, we propose a novel framework for joint active user detection (AUD), channel estimation (CE), and data detection (DD) for massive grant-free transmission in cell-free systems. We formulate an optimization problem for joint AUD, CE, and DD by considering both the sparsity of the data matrix, which arises from intermittent user activity, and the sparsity of the effective channel matrix, which arises from intermittent user activity and large-scale fading. We approximately solve this optimization problem with a box-constrained forward-backward splitting algorithm, which significantly improves AUD, CE, and DD performance. We demonstrate the effectiveness of the proposed framework through simulation experiments.
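The forward-backward splitting step described above can be sketched in a few lines; the simple real-valued Lasso-style objective, dimensions, and parameter names below are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fbs_box(Y, A, lam=0.1, box=1.0, iters=100):
    """Box-constrained forward-backward splitting (illustrative sketch).

    Approximately solves min_X 0.5*||Y - A X||_F^2 + lam*||X||_1
    subject to |X_ij| <= box, by alternating a gradient (forward) step
    with a proximal (backward) step followed by a box projection.
    """
    X = np.zeros((A.shape[1], Y.shape[1]))
    tau = 1.0 / np.linalg.norm(A, 2) ** 2                # step size from the Lipschitz constant
    for _ in range(iters):
        G = X - tau * A.T @ (A @ X - Y)                  # forward (gradient) step
        X = np.sign(G) * np.maximum(np.abs(G) - tau * lam, 0.0)  # backward step: soft-thresholding
        X = np.clip(X, -box, box)                        # project onto the box constraint
    return X
```

The soft-thresholding step promotes the sparsity that arises from intermittent user activity, while the clipping step enforces the box constraint.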
Channel charting is an emerging self-supervised method that maps channel state information (CSI) to a low-dimensional latent space, which represents pseudo-positions of user equipments (UEs). While this latent space preserves local geometry, i.e., nearby UEs are nearby in latent space, the pseudo-positions are in arbitrary coordinates and global geometry is not preserved. In order to enable channel charting in real-world coordinates, we propose a novel bilateration loss for multipoint wireless systems in which only the access point (AP) locations are known; no geometrical models or ground-truth UE position information is required. The idea behind this bilateration loss is to compare the received power at pairs of APs in order to determine whether a UE should be placed closer to one AP or the other in latent space. We demonstrate the efficacy of our method using channel vectors from a commercial ray-tracer.
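The pairwise power comparison behind the bilateration loss can be illustrated with a small sketch; the hinge form, the margin parameter, and all names below are our assumptions for illustration:

```python
import numpy as np

def bilateration_loss(z, ap_pos, powers, margin=0.0):
    """Illustrative bilateration loss for a single UE.

    z:      latent (pseudo-)position of the UE, shape (2,)
    ap_pos: known AP locations, shape (A, 2)
    powers: received power at each AP, shape (A,)

    For every AP pair (i, j) with powers[i] > powers[j], penalize placements
    where the UE sits *farther* from AP i than from AP j in latent space.
    """
    d = np.linalg.norm(ap_pos - z, axis=1)       # latent UE-to-AP distances
    loss = 0.0
    for i in range(len(powers)):
        for j in range(len(powers)):
            if powers[i] > powers[j]:
                loss += max(0.0, d[i] - d[j] + margin)   # hinge penalty
    return loss
```

Because only relative powers enter the loss, no path-loss model or ground-truth UE position is needed, matching the self-supervised setting above.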
MIMO processing enables jammer mitigation through spatial filtering, provided that the receiver knows the spatial signature of the jammer interference. Estimating this signature is easy for barrage jammers that transmit continuously and with a static signature, but difficult for more sophisticated jammers: Smart jammers may deliberately suspend transmission when the receiver tries to estimate their spatial signature, they may use time-varying beamforming to continuously change their spatial signature, or they may stay mostly silent and jam only at specific time instants (e.g., during the transmission of control signals). To deal with such smart jammers, we propose MASH, the first method that indiscriminately mitigates all types of jammers: Assume that the transmitter and receiver share a common secret. Based on this secret, the transmitter embeds (with a linear time-domain transform) its signal in a secret subspace of a higher-dimensional space. The receiver applies a reciprocal linear transform to the receive signal, which (i) raises the legitimate transmit signal from its secret subspace and (ii) provably transforms any jammer into a barrage jammer, which makes estimation and mitigation via MIMO processing straightforward. We show the efficacy of MASH for data transmission in the massive multi-user MIMO uplink.
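The embed-and-raise mechanism can be illustrated with a toy real-valued sketch; the secret-basis construction, dimensions, and names below are our assumptions (MASH itself operates on complex time-domain signals):

```python
import numpy as np

def secret_basis(N, K, seed):
    """Secret N x K orthonormal basis derived from a shared seed (sketch)."""
    g = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(g.standard_normal((N, N)))
    return Q[:, :K]

rng = np.random.default_rng(0)
N, K, seed = 8, 4, 1234          # shared secret: the seed

# Transmitter: embed K symbols into an N-dimensional signal (secret subspace).
P = secret_basis(N, K, seed)
s = rng.standard_normal(K)       # legitimate symbols
x = P @ s                        # embedded transmit signal

# Receiver: the reciprocal transform raises the signal from its subspace.
s_hat = P.T @ x

# A bursty jammer that hits a single instant is spread across all K
# dimensions by the receiver transform, i.e., it looks like a barrage jammer.
j = np.zeros(N)
j[0] = 5.0
j_eff = P.T @ j
```

Without knowledge of the seed, a jammer cannot align with (or avoid) the secret subspace, which is the intuition behind the barrage-jammer transformation above.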
Channel charting is a recently proposed framework that applies dimensionality reduction to channel state information (CSI) in wireless systems with the goal of associating a pseudo-position to each mobile user in a low-dimensional space: the channel chart. Channel charting summarizes the entire CSI dataset in a self-supervised manner, which opens up a range of applications that are tied to user location. In this article, we introduce the theoretical underpinnings of channel charting and present an overview of recent algorithmic developments and experimental results obtained in the field. We furthermore discuss concrete application examples of channel charting to network- and user-related applications, and we provide a perspective on future developments and challenges as well as the role of channel charting in next-generation wireless networks.
Baseband processing algorithms often require knowledge of the noise power, signal power, or signal-to-noise ratio (SNR). In practice, these parameters are typically unknown and must be estimated. Furthermore, the mean-square error (MSE) is a desirable metric to be minimized in a variety of estimation and signal recovery algorithms. However, the MSE cannot directly be used as it depends on the true signal that is generally unknown to the estimator. In this paper, we propose novel blind estimators for the average noise power, average receive signal power, SNR, and MSE. The proposed estimators can be computed at low complexity and solely rely on the large-dimensional and sparse nature of the processed data. Our estimators can be used (i) to quickly track some of the key system parameters while avoiding additional pilot overhead, (ii) to design low-complexity nonparametric algorithms that require such quantities, and (iii) to accelerate more sophisticated estimation or recovery algorithms. We conduct a theoretical analysis of the proposed estimators for a Bernoulli complex Gaussian (BCG) prior, and we demonstrate their efficacy via synthetic experiments. We also provide three application examples that deviate from the BCG prior in millimeter-wave multi-antenna and cell-free wireless systems, for which we develop nonparametric denoising algorithms that improve channel-estimation accuracy with a performance comparable to denoisers that assume perfect knowledge of the system parameters.
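To give a flavor of such blind estimation, the sketch below estimates the average noise power of a large sparse vector from an order statistic, exploiting that most entries are noise-only; this median-based estimator is our illustration, not the paper's estimator:

```python
import numpy as np

def blind_noise_power(y):
    """Blind average noise power estimate for a large sparse vector
    observed in complex Gaussian noise (illustrative sketch).

    Since most entries are noise-only, an order statistic of |y|^2
    reveals the noise power: for circularly-symmetric complex Gaussian
    noise with power N0, |y|^2 is exponential with median N0 * ln(2).
    """
    return np.median(np.abs(y) ** 2) / np.log(2.0)
```

The median is barely affected by the few large signal-bearing entries, which is exactly the large-dimensional, sparse structure the abstract refers to.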
Orthogonal frequency-division multiplexing (OFDM) time-domain signals exhibit a high peak-to-average power ratio (PAR), which requires linear radio-frequency chains to avoid an increase in error-vector magnitude (EVM) and out-of-band (OOB) emissions. In this paper, we propose a novel joint PAR reduction and precoding algorithm that relaxes these linearity requirements in massive multiuser (MU) multiple-input multiple-output (MIMO) wireless systems. Concretely, we develop a novel alternating projections method, which limits the PAR and transmit power increase while simultaneously suppressing MU interference. We provide a theoretical foundation of our algorithm and provide simulation results for a massive MU-MIMO-OFDM scenario. Our results demonstrate significant PAR reduction while limiting the transmit power, without increasing EVM or OOB emissions.
Iterative detection and decoding (IDD) is known to achieve near-capacity performance in multi-antenna wireless systems. We propose deep-unfolded interleaved detection and decoding (DUIDD), a new paradigm that reduces the complexity of IDD while achieving even lower error rates. DUIDD interleaves the inner stages of the data detector and channel decoder, which expedites convergence and reduces complexity. Furthermore, DUIDD applies deep unfolding to automatically optimize algorithmic hyperparameters, soft-information exchange, message damping, and state forwarding. We demonstrate the efficacy of DUIDD using NVIDIA's Sionna link-level simulator in a near-5G multi-user MIMO-OFDM wireless system with a novel low-complexity soft-input soft-output data detector, an optimized low-density parity-check decoder, and channel vectors from a commercial ray-tracer. Our results show that DUIDD outperforms classical IDD both in terms of block error rate and computational complexity.
Localization services for wireless devices play an increasingly important role, and a plethora of emerging services and applications already rely on precise position information. Widely used on-device positioning methods, such as the global positioning system, enable accurate outdoor positioning and provide the users with full control over what services are allowed to access location information. To provide accurate positioning indoors or in cluttered urban scenarios without line-of-sight satellite connectivity, powerful off-device positioning systems, which process channel state information (CSI) with deep neural networks, have emerged recently. Such off-device positioning systems inherently link a user's data transmission with its localization, since accurate CSI measurements are necessary for reliable wireless communication. This not only prevents the users from controlling who can access this information but also enables virtually everyone in the device's range to estimate its location, resulting in serious privacy and security concerns. We propose on-device attacks against off-device wireless positioning systems in multi-antenna orthogonal frequency-division multiplexing systems while minimizing the impact on quality-of-service, and we demonstrate their efficacy using measured datasets for outdoor and indoor scenarios. We also investigate defenses to counter such attack mechanisms, and we discuss the limitations and implications on protecting location privacy in future wireless communication systems.
Multi-antenna (MIMO) processing is a promising solution to the problem of jammer mitigation. Existing methods mitigate the jammer based on an estimate of its subspace (or receive statistics) acquired through a dedicated training phase. This strategy has two main drawbacks: (i) it reduces the communication rate since no data can be transmitted during the training phase and (ii) it can be evaded by smart or multi-antenna jammers that are quiet during the training phase or that dynamically change their subspace through time-varying beamforming. To address these drawbacks, we propose Joint jammer Mitigation and data Detection (JMD), a novel paradigm for MIMO jammer mitigation. The core idea is to estimate and remove the jammer interference subspace jointly with detecting the transmit data over multiple time slots. Doing so removes the need for a dedicated and rate-reducing training period while mitigating smart and dynamic multi-antenna jammers. We instantiate our paradigm with SANDMAN, a simple and practical algorithm for multi-user MIMO uplink JMD. Extensive simulations demonstrate the efficacy of JMD, and of SANDMAN in particular, for jammer mitigation.
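The core alternating idea, estimating the jammer subspace jointly with detecting the data over multiple time slots, can be sketched for a single-antenna jammer and BPSK transmission as follows. This toy loop is our illustration, not the SANDMAN algorithm:

```python
import numpy as np

def jmd_sketch(Y, H, iters=20):
    """Joint jammer mitigation and data detection (illustrative sketch).

    Y: receive matrix (B antennas x T slots); H: channel (B x U); BPSK data.
    Alternates between (i) detecting the data after projecting out the
    current jammer-direction estimate and (ii) re-estimating the jammer
    direction as the leading eigenvector of the residual covariance.
    No dedicated jammer-training phase is needed.
    """
    B, _ = Y.shape
    j = np.zeros((B, 1))                             # initial jammer estimate: none
    for _ in range(iters):
        P = np.eye(B) - j @ j.T                      # project out jammer direction
        S = np.sign(np.linalg.pinv(P @ H) @ P @ Y)   # zero-forcing + BPSK slicer
        R = Y - H @ S                                # residual: jammer + errors
        _, V = np.linalg.eigh(R @ R.T)
        j = V[:, [-1]]                               # leading eigenvector of residual
    return S
```

Because the jammer direction is inferred from the residual over all slots, a jammer gains nothing by staying quiet during any particular period, which mirrors the motivation above.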
Even though machine learning (ML) techniques are widely used in communications, the question of how to train communication systems has received surprisingly little attention. In this paper, we show that the commonly used binary cross-entropy (BCE) loss is a sensible choice in uncoded systems, e.g., for training ML-assisted data detectors, but may not be optimal in coded systems. We propose new loss functions targeted at minimizing the block error rate, as well as SNR de-weighting, a novel method that trains communication systems for optimal performance over a range of signal-to-noise ratios. The utility of the proposed loss functions and of SNR de-weighting is demonstrated through simulations in NVIDIA Sionna.
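The BCE loss on detector soft outputs, and a weighted combination of per-SNR losses, can be sketched as follows; the specific weight choice prescribed by SNR de-weighting is not reproduced here, and all names are our illustrative assumptions:

```python
import numpy as np

def bce(bits, llrs):
    """Binary cross-entropy between bits and log-likelihood ratios,
    with p(bit = 1) = sigmoid(llr); computed in a numerically simple form:
    BCE = mean(log(1 + exp(-(2b - 1) * llr)))."""
    return np.mean(np.log1p(np.exp(-(2 * bits - 1) * llrs)))

def multi_snr_loss(bits, llrs_per_snr, weights):
    """Weighted training loss over a range of SNRs (illustrative).

    llrs_per_snr: one LLR array per training SNR point.
    weights:      per-SNR weights; SNR de-weighting corresponds to a
                  particular, performance-balancing choice of these.
    """
    losses = np.array([bce(bits, llrs) for llrs in llrs_per_snr])
    return np.dot(weights, losses) / np.sum(weights)
```

Uninformative LLRs (all zero) give a BCE of ln 2 per bit, while confident, correct LLRs drive the loss toward zero; the multi-SNR wrapper shows where per-SNR weights enter the training objective.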