We propose new low-fidelity (LoFi) user equipment (UE) scheduling algorithms for multiuser multiple-input multiple-output (MIMO) wireless communication systems. The proposed methods rely on an efficient guess-and-check procedure that, given an objective function, performs paired comparisons between random subsets of UEs to decide which UEs should be scheduled in which time slots. The proposed LoFi scheduling methods are computationally efficient, highly parallelizable, and gradient-free, which enables the use of almost arbitrary, non-differentiable objective functions. System simulations in a millimeter-wave (mmWave) multiuser MIMO scenario demonstrate that the proposed LoFi schedulers outperform a range of state-of-the-art user scheduling algorithms in terms of bit error rate and/or computational complexity.
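The guess-and-check idea can be illustrated with a minimal sketch: in each round, a fresh random UE subset is compared against the incumbent under an arbitrary (possibly non-differentiable) objective, and the better one is kept. The function name `lofi_schedule`, its interface, and the candidate-generation rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def lofi_schedule(num_ues, slot_size, objective, num_trials=100, rng=None):
    """Guess-and-check scheduling sketch (hypothetical interface).

    Repeatedly draws a random subset of `slot_size` UEs and keeps it
    if it beats the incumbent under `objective`. Because only function
    evaluations are needed, the objective may be non-differentiable,
    and independent trials are trivially parallelizable.
    """
    rng = rng or random.Random(0)
    best = rng.sample(range(num_ues), slot_size)
    best_val = objective(best)
    for _ in range(num_trials):
        cand = rng.sample(range(num_ues), slot_size)  # random guess
        val = objective(cand)
        if val > best_val:  # paired comparison: keep the better subset
            best, best_val = cand, val
    return best, best_val
```

For example, passing `sum` as the objective favors subsets of high-index UEs; in practice the objective would score a candidate subset via a system-level metric.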
Iterative detection and decoding (IDD) is known to achieve near-capacity performance in multi-antenna wireless systems. We propose deep-unfolded interleaved detection and decoding (DUIDD), a new paradigm that reduces the complexity of IDD while achieving even lower error rates. DUIDD interleaves the inner stages of the data detector and channel decoder, which expedites convergence and reduces complexity. Furthermore, DUIDD applies deep unfolding to automatically optimize algorithmic hyperparameters, soft-information exchange, message damping, and state forwarding. We demonstrate the efficacy of DUIDD using NVIDIA's Sionna link-level simulator in a near-5G multiuser MIMO-OFDM wireless system with a novel low-complexity soft-input soft-output data detector, an optimized low-density parity-check decoder, and channel vectors from a commercial ray-tracer. Our results show that DUIDD outperforms classical IDD both in terms of block error rate and computational complexity.
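The structural difference between classical IDD and the interleaved schedule can be sketched with stand-in detector and decoder functions. The stubs `detect` and `decode` below are placeholders that only illustrate loop structure; the paper's actual soft-input soft-output detector, LDPC decoder, and deep-unfolded hyperparameters are not reproduced here.

```python
def classical_idd(detect, decode, y, num_outer, num_inner):
    """Classical IDD loop sketch: each outer iteration runs the
    detector once, followed by a full block of inner decoder
    iterations, before soft information is fed back."""
    prior = None
    for _ in range(num_outer):
        llr = detect(y, prior)
        for _ in range(num_inner):
            llr = decode(llr)
        prior = llr  # feed decoder output back as detector prior
    return prior

def duidd_style(detect, decode, y, num_stages):
    """Interleaved sketch: detector and decoder inner stages
    alternate, so soft information is exchanged more frequently
    with fewer total decoder iterations."""
    prior = None
    for _ in range(num_stages):
        prior = decode(detect(y, prior))
    return prior
```

The interleaved schedule runs fewer decoder iterations per detector call, which is the source of the complexity reduction; deep unfolding then tunes how the exchanged messages are damped and forwarded across stages.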
Even though machine learning (ML) techniques are widely used in communications, the question of how to train communication systems has received surprisingly little attention. In this paper, we show that the commonly used binary cross-entropy (BCE) loss is a sensible choice in uncoded systems, e.g., for training ML-assisted data detectors, but may not be optimal in coded systems. We propose new loss functions targeted at minimizing the block error rate, as well as SNR de-weighting, a novel method that trains communication systems for optimal performance over a range of signal-to-noise ratios (SNRs). The utility of the proposed loss functions and of SNR de-weighting is demonstrated through simulations in NVIDIA Sionna.
We present distributed methods for jointly optimizing Intelligent Reflecting Surface (IRS) phase shifts and beamformers in a cellular network. The proposed schemes require knowledge of only the intra-cell training sequences and the corresponding received signals, without explicit channel estimation. Instead, an SINR objective is estimated via sample means and maximized directly. This automatically accounts for and mitigates both intra- and inter-cell interference, provided that the uplink training is synchronized across cells. Different schemes are considered that limit the set of known training sequences from interferers. With MIMO links, an iterative synchronous bi-directional training scheme jointly optimizes the IRS parameters with the beamformers and combiners. Simulation results show that the proposed distributed methods incur only a modest performance degradation compared to centralized channel estimation schemes, which estimate and exchange all cross-channels between cells, while performing significantly better than channel estimation schemes that ignore inter-cell interference.
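The core idea of replacing explicit channel estimation with sample-mean statistics can be sketched for a single scalar link: correlating the received samples with a known training sequence yields the signal component, and the residual power captures interference plus noise. The function `estimate_sinr` and its unit-power-pilot assumption are illustrative, not the paper's exact estimator.

```python
import numpy as np

def estimate_sinr(y, s):
    """Sample-mean SINR estimate from a known unit-power training
    sequence s and received samples y (illustrative sketch).

    No explicit channel estimate is exchanged: the effective channel
    coefficient is obtained by correlation, and everything left over
    is treated as interference plus noise.
    """
    s = np.asarray(s, dtype=complex)
    y = np.asarray(y, dtype=complex)
    h_hat = np.mean(y * np.conj(s))          # effective signal component
    resid = y - h_hat * s                    # interference + noise
    return float(np.abs(h_hat) ** 2 / np.mean(np.abs(resid) ** 2))
```

An IRS phase-shift or beamformer update would then evaluate this estimate directly as the objective, which is what makes the scheme distributed: only local pilots and local received signals are needed.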
In this paper, we present a machine learning approach for estimating the number of incident wavefronts in a direction-of-arrival scenario. In contrast to previous works, a multilayer neural network with a cross-entropy objective is trained. Furthermore, we investigate an online training procedure that allows adaptation of the neural network to imperfections of an antenna array without explicitly calibrating the array manifold. We show via simulations that the proposed method outperforms classical model order selection schemes based on information criteria in terms of accuracy, especially for a small number of snapshots and at low signal-to-noise ratios. Moreover, when initialized by offline training on artificial data, the online training procedure enables the neural network to adapt with only a few online training samples.
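For context, one of the classical information-criterion baselines mentioned above is the MDL model-order estimator of Wax and Kailath, which operates on the eigenvalues of the sample covariance matrix. The sketch below implements that standard baseline (not the proposed neural network); the variable names are my own.

```python
import numpy as np

def mdl_order(eigvals, num_snapshots):
    """Classical MDL estimate of the number of wavefronts from the
    eigenvalues of a p x p sample covariance matrix (Wax & Kailath).

    For each candidate order k, the likelihood term compares the
    geometric and arithmetic means of the smallest p-k eigenvalues
    (equal means indicate an isotropic noise subspace), and a
    penalty term grows with the number of free parameters.
    """
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p = len(lam)
    scores = []
    for k in range(p):
        tail = lam[k:]                         # presumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))    # geometric mean
        arith = np.mean(tail)                  # arithmetic mean
        loglik = -num_snapshots * (p - k) * np.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * np.log(num_snapshots)
        scores.append(loglik + penalty)
    return int(np.argmin(scores))
```

With two dominant eigenvalues well separated from a flat noise floor, the estimator returns order 2; the learned approach in the abstract targets exactly the regimes (few snapshots, low SNR) where such eigenvalue gaps become unreliable.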