In the design of wireless receivers, deep neural networks (DNNs) can be combined with traditional model-based receiver algorithms to realize modular hybrid model-based/data-driven architectures that incorporate domain knowledge. Such architectures typically comprise multiple modules, each carrying out a different functionality. Conventionally trained DNN-based modules are known to produce poorly calibrated, typically overconfident, decisions. Consequently, an incorrect decision may propagate through the architecture without any indication of its insufficient accuracy. To address this problem, we present a novel combination of Bayesian learning with hybrid model-based/data-driven architectures for wireless receiver design. The proposed methodology, referred to as modular model-based Bayesian learning, yields better-calibrated modules, improving both the accuracy and the calibration of the overall receiver. We demonstrate this approach for the recently proposed DeepSIC MIMO receiver, showing significant improvements over state-of-the-art learning methods.
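One way to see why Bayesian learning improves calibration is Bayesian model averaging: the soft output a module passes onward is the average of class probabilities over sampled weights, rather than the output of a single point estimate, which tempers overconfident decisions. A toy numpy sketch (the logits are invented stand-ins; this is not the paper's DeepSIC training procedure):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Made-up logits standing in for one module's outputs under three different
# sampled weight configurations (e.g., ensemble members or posterior samples).
logit_samples = np.array([[4.0, 0.0],
                          [0.0, 4.0],
                          [1.0, 0.0]])

# Bayesian model averaging: average the per-sample class probabilities ...
p_bayes = softmax(logit_samples).mean(axis=0)
# ... versus the softmax of a single point estimate of the weights.
p_point = softmax(logit_samples.mean(axis=0))
```

The averaged distribution `p_bayes` is markedly less confident than any individual confident sample, which is the behavior that keeps erroneous hard decisions from propagating unflagged.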
Hybrid precoding plays a key role in realizing massive multiple-input multiple-output (MIMO) transmitters with controllable cost. MIMO precoders must adapt frequently to variations in the channel conditions. In hybrid MIMO, where precoding comprises digital and analog beamforming, such adaptation involves lengthy optimization and depends on accurate channel state information (CSI). This degrades spectral efficiency when the channel varies rapidly or when operating with noisy CSI. In this work, we employ deep learning techniques to learn how to rapidly and robustly optimize hybrid precoders while remaining fully interpretable. We leverage data to learn iteration-dependent hyperparameter settings of a projected gradient sum-rate optimizer with a predefined number of iterations. The resulting algorithm maps channel realizations into hybrid precoding settings while preserving the interpretable flow of the optimizer and improving its convergence speed. To cope with noisy CSI, we learn to optimize the minimal achievable sum-rate among all tolerable errors, proposing a robust hybrid precoding scheme based on the projected conceptual mirror prox minimax optimizer. Numerical results demonstrate that our approach requires over ten times fewer iterations than conventional optimization with shared hyperparameters, while achieving similar and even improved sum-rate performance.
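The idea of learning iteration-dependent hyperparameters for a fixed-depth optimizer can be sketched generically. The following numpy snippet runs projected gradient descent on a toy box-constrained least-squares problem rather than the paper's sum-rate objective; the per-iteration step sizes passed as `mu` are the kind of hyperparameters that would be fitted from data in the unfolded optimizer (all names and values are illustrative):

```python
import numpy as np

def projected_gradient(A, b, mu, K, proj=lambda x: np.clip(x, 0.0, 1.0)):
    """Projected gradient descent on the toy objective f(x) = 0.5*||Ax - b||^2
    with a box constraint. `mu` is either a scalar (shared step size, as in
    conventional optimization) or a length-K array of iteration-dependent
    step sizes (the hyperparameters that deep unfolding learns from data)."""
    mu = np.broadcast_to(np.asarray(mu, dtype=float), (K,))
    x = np.zeros(A.shape[1])
    for k in range(K):
        x = proj(x - mu[k] * (A.T @ (A @ x - b)))  # gradient step, then project
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
x_shared = projected_gradient(A, b, mu=0.05, K=10)                   # shared step
x_unfolded = projected_gradient(A, b, mu=np.linspace(0.1, 0.02, 10), K=10)
```

Because the iteration count K is fixed in advance, tuning each `mu[k]` separately lets a shallow unfolded optimizer match the progress of many more shared-step iterations, while every intermediate variable retains its original optimization-theoretic meaning.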
Multiple-input multiple-output (MIMO) systems exploit spatial diversity to facilitate multi-user communications with high spectral efficiency by beamforming. As MIMO systems utilize multiple antennas and radio frequency (RF) chains, they are typically costly to implement and consume high power. A common method to reduce the cost of MIMO receivers is to use fewer RF chains than antennas by employing hybrid analog/digital beamforming (HBF). However, the added analog circuitry involves active components whose power consumption may surpass the savings from RF chain reduction. An additional method to realize power-efficient MIMO systems is to use low-resolution analog-to-digital converters (ADCs), which typically compromise signal recovery accuracy. In this work, we propose a power-efficient hybrid MIMO receiver with low-quantization-rate ADCs, jointly optimizing the analog and digital processing in a hardware-oriented manner using task-specific quantization techniques. To reduce power consumption in the analog front-end, we utilize an efficient analog hardware architecture comprised of sparse low-resolution vector modulators, while accounting for their properties in the design to maintain recovery accuracy and mitigate interferers in congested environments. To account for common mismatches induced by non-ideal hardware and inaccurate channel state information, we propose a robust mismatch-aware design. Supported by numerical simulations and power analysis, our power-efficient MIMO receiver achieves signal recovery performance comparable to power-hungry fully-digital MIMO receivers with high-resolution ADCs. Furthermore, our receiver outperforms task-agnostic HBF receivers with low-rate ADCs in recovery accuracy at lower power, and successfully copes with hardware mismatches.
Integrated sensing and communications (ISAC) is envisioned to be an integral part of future wireless networks, especially when operating at the millimeter-wave (mmWave) and terahertz (THz) frequency bands. However, establishing wireless connections at these high frequencies is quite challenging, mainly due to the severe pathloss, which hinders reliable communication and sensing. Another emerging technology for next-generation wireless systems is reconfigurable intelligent surfaces (RISs), which are capable of modifying harsh propagation environments. RISs are the focus of growing research and industrial attention, bringing forth the vision of smart and programmable signal propagation environments. In this article, we provide a tutorial-style overview of the applications and benefits of RISs for sensing functionalities in general, and for ISAC systems in particular. We highlight the potential advantages of fusing these two emerging technologies, and identify for the first time that: i) joint sensing and communications designs are most beneficial when the channels underlying these operations are coupled, and ii) RISs offer means for controlling this beneficial coupling. The usefulness of RIS-aided ISAC thus goes beyond the obvious individual gains of each of these technologies in both performance and power efficiency. We also discuss the main signal processing challenges and future research directions arising from the fusion of these two emerging technologies.
Sixth-generation (6G) cellular communications are expected to support enhanced wireless localization capabilities. The widespread deployment of large arrays and the use of high-frequency bands give rise to new considerations for localization applications. First, emerging antenna architectures, such as dynamic metasurface antennas (DMAs), are expected to be frequently utilized thanks to their high achievable angular resolution and low hardware complexity. Furthermore, owing to the adoption of arrays with large apertures, wireless localization is likely to take place in the radiating near-field (Fresnel) region, which provides new degrees of freedom. While current studies mostly focus on costly fully-digital antenna arrays, in this paper we investigate how DMAs can be applied for near-field localization of a single user. We use a direct positioning estimation method based on the curvature-of-arrival of the impinging wavefront to obtain the user location, and characterize the effect of DMA tuning on the estimation accuracy. Next, we propose an algorithm for configuring the DMA to optimize near-field localization, first tuning the adjustable DMA coefficients to minimize the estimation error using postulated knowledge of the actual user position. Finally, we propose a sub-optimal iterative algorithm that does not rely on such knowledge. Simulation results show that DMA-based near-field localization accuracy can approach that of fully-digital arrays at lower cost.
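The curvature-of-arrival cue that direct near-field positioning relies on can be seen in the phase profile across a large aperture: within the Fresnel region the wavefront is spherical, so its phase deviates quadratically from the planar far-field model, and that deviation encodes range as well as angle. A small numpy illustration with an invented uniform-linear-array geometry (not the paper's DMA setup):

```python
import numpy as np

# Toy 64-element uniform linear array at half-wavelength spacing.
wavelength = 0.01                               # 30 GHz carrier (illustrative)
elems = np.arange(-32, 32) * wavelength / 2     # element x-positions [m]
user = np.array([1.0, 2.0])                     # user at x = 1 m, range y = 2 m

# Exact (near-field) phase: spherical wavefront, per-element distances.
dists = np.sqrt((user[0] - elems) ** 2 + user[1] ** 2)
phase_nf = 2 * np.pi * dists / wavelength

# Far-field approximation: planar wavefront, phase linear in element position.
r0 = np.linalg.norm(user)
theta = np.arcsin(user[0] / r0)
phase_ff = 2 * np.pi * (r0 - elems * np.sin(theta)) / wavelength

# The residual is the wavefront curvature: it vanishes at the reference
# element and grows quadratically toward the array edges, encoding range.
curvature = phase_nf - phase_ff
```

Here the aperture is D = 0.32 m, so the ~2.24 m user range lies well inside the Fraunhofer distance 2D²/λ ≈ 20 m, and the curvature term reaches a few radians at the array edges, which is what makes range estimation from a single snapshot feasible.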
Stochastic control deals with finding an optimal control signal for a dynamical system under uncertainty, and plays a key role in numerous applications. Linear quadratic Gaussian (LQG) control is a widely-used setting, in which the system dynamics are represented as a linear Gaussian state-space (SS) model and the objective function is quadratic. For this setting, the optimal controller is obtained in closed form via the separation principle. In practice, however, the underlying system dynamics often cannot be faithfully captured by a fully known linear Gaussian SS model, limiting the performance of LQG control. Here, we present LQGNet, a stochastic controller that leverages data to operate under partially known dynamics. LQGNet augments the state-tracking module of separation-based control with a dedicated trainable algorithm. The resulting system preserves the operation of classic LQG control while learning to cope with partially known SS models without having to fully identify the dynamics. We empirically show that LQGNet outperforms classic stochastic control by overcoming mismatched SS models.
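The separation principle behind classic LQG control, whose state-tracking side LQGNet augments with a trainable algorithm, decomposes the problem into an LQR gain acting on a Kalman state estimate, each obtained from its own Riccati equation. A minimal numpy sketch for a toy fully-known double-integrator system (all matrices are illustrative, and the Riccati solver is a plain value iteration rather than a production solver):

```python
import numpy as np

def dare(A, B, Q, R, iters=500):
    """Solve a discrete algebraic Riccati equation by value iteration."""
    P = Q.copy()
    for _ in range(iters):
        G = np.linalg.inv(R + B.T @ P @ B)
        P = Q + A.T @ P @ A - A.T @ P @ B @ G @ B.T @ P @ A
    return P

# Toy system x_{t+1} = A x_t + B u_t + w_t, y_t = C x_t + v_t,
# with identity noise covariances (illustrative values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.eye(2)
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost

# Control side of the separation principle: LQR gain, u_t = -L @ x_hat_t.
P = dare(A, B, Q, R)
L = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A

# Estimation side: Kalman predictor gain from the dual Riccati equation;
# the estimate evolves as x_hat <- A @ x_hat + B @ u + A @ K @ (y - C @ x_hat).
S = dare(A.T, C.T, np.eye(2), np.eye(2))
K = S @ C.T @ np.linalg.inv(C @ S @ C.T + np.eye(2))
```

The two designs are computed independently yet are jointly optimal for the fully-known linear Gaussian model; it is precisely the estimation side that degrades under model mismatch, which is why LQGNet makes the tracking module trainable while keeping the control gain's closed form.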
Electrocardiogram (ECG) signals are used in many healthcare applications, including at-home monitoring of vital signs. These applications often rely on wearable technology and yield low-quality ECG signals. Although many methods have been proposed for denoising the ECG to boost its quality and enable clinical interpretation, they typically fall short for ECG data obtained with wearable technology, because of either their limited tolerance to noise or their limited flexibility in capturing ECG dynamics. This paper presents HKF, a hierarchical Kalman filtering method that leverages a patient-specific learned structured prior of the ECG signal and integrates it into a state-space model, yielding filters that capture both intra- and inter-heartbeat dynamics. HKF is demonstrated to outperform previously proposed methods, such as the model-based Kalman filter and data-driven autoencoders, on the ECG denoising task in terms of mean-squared error, making it a suitable candidate for extramural healthcare settings.
Simultaneous localization and mapping (SLAM) constructs a map of an unknown environment while simultaneously localizing a moving agent on that map. The extended Kalman filter (EKF) has been widely adopted as a low-complexity solution for online SLAM, relying on motion and measurement models of the moving agent. In practice, however, acquiring precise knowledge of these models is very challenging, and model mismatch causes severe performance loss in SLAM. In this paper, inspired by the recently proposed KalmanNet, we present a robust EKF algorithm that harnesses deep learning for online SLAM, referred to as Split-KalmanNet. The key idea of Split-KalmanNet is to compute the Kalman gain using the Jacobian matrix of the measurement function together with two recurrent neural networks (RNNs). The two RNNs independently learn from data the covariance matrices of the prior state estimate and of the innovation. This split structure in the computation of the Kalman gain makes it possible to compensate for state-model and measurement-model mismatch effects independently. Numerical simulation results verify that Split-KalmanNet outperforms the traditional EKF and the state-of-the-art KalmanNet algorithm in various model mismatch scenarios.
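The split structure can be read off the Kalman-gain formula K = Σ·Hᵀ·S⁻¹: the analytic measurement Jacobian H is retained, while the prior covariance Σ and the innovation covariance S, the two quantities Split-KalmanNet learns with separate RNNs, enter as independent inputs. A minimal numpy sketch in which fixed matrices stand in for the RNN outputs:

```python
import numpy as np

def split_kalman_gain(H, Sigma_prior, S_inn):
    """Kalman gain K = Sigma_prior @ H.T @ inv(S_inn), combining the analytic
    measurement Jacobian H with two separately supplied quantities: the prior
    state covariance and the innovation covariance. In Split-KalmanNet each
    is produced by its own RNN; here both are fixed stand-in matrices."""
    return Sigma_prior @ H.T @ np.linalg.inv(S_inn)

H = np.array([[1.0, 0.0]])              # observe the first state only
Sigma_prior = 0.5 * np.eye(2)           # stand-in for RNN #1 output
S_inn = H @ Sigma_prior @ H.T + 0.1     # stand-in for RNN #2 output
K = split_kalman_gain(H, Sigma_prior, S_inn)

# Standard EKF-style measurement update using the split gain.
x_prior = np.zeros(2)
y = np.array([1.0])
x_post = x_prior + K @ (y - H @ x_prior)
```

Because Σ and S are learned separately, an error in the motion model (which distorts Σ) can be corrected without disturbing the correction for measurement-model errors (which distort S), which is the independence the abstract refers to.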
The Kalman filter (KF) is a widely-used algorithm for tracking the latent state of a dynamical system from noisy observations. For systems that are well described by linear Gaussian state-space models, the KF minimizes the mean-squared error (MSE). In practice, however, observations are corrupted by outliers, which severely impair the KF's performance. In this work, an outlier-insensitive KF is proposed, in which robustness is achieved by modeling each potential outlier as a normally distributed random variable with unknown variance (NUV). The NUV variances are estimated online using either expectation-maximization (EM) or alternating maximization (AM). The EM approach was previously proposed for smoothing with outliers and is adapted here to filtering. While both EM and AM attain the same performance and outperform the competing algorithms, the AM approach is less complex and requires roughly 40% less run-time. Our empirical study demonstrates that the proposed outlier-insensitive KF outperforms previously proposed algorithms in terms of MSE, and that on data clean of outliers it reverts to the classic KF, i.e., MSE optimality is preserved.
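The NUV mechanism can be illustrated in a single scalar measurement update: when the innovation is implausibly large under the nominal model, the estimated outlier variance inflates the innovation covariance and shrinks the Kalman gain, so the outlier barely moves the estimate; when the variance estimate is zero, the classic KF update is recovered. A simplified numpy sketch using a closed-form one-shot variance estimate rather than the paper's full EM/AM recursions:

```python
import numpy as np

def nuv_update(x_prior, p_prior, y, h=1.0, r=0.1):
    """Scalar KF measurement update where the observation may be an outlier,
    modeled as extra Gaussian noise with unknown variance q (NUV). Here q is
    set to its maximum-likelihood value given the innovation; for clean data
    q = 0 and the update reverts to the classic KF. Simplified illustration;
    the paper's EM/AM iterations differ in detail."""
    e = y - h * x_prior                  # innovation
    s_nom = h * p_prior * h + r          # nominal innovation variance
    q = max(0.0, e * e - s_nom)          # ML estimate of the outlier variance
    s = s_nom + q                        # inflated innovation variance
    k = p_prior * h / s                  # outlier-aware Kalman gain
    return x_prior + k * e, (1.0 - k * h) * p_prior

x_clean, _ = nuv_update(x_prior=0.0, p_prior=1.0, y=1.0)     # ordinary update
x_outlier, _ = nuv_update(x_prior=0.0, p_prior=1.0, y=50.0)  # outlier rejected
```

On the clean measurement the estimate moves most of the way toward y, exactly as the classic KF would; on the gross outlier the gain collapses and the estimate stays near the prior, which is the insensitivity the method provides.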
Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices. The distributed operation of FL gives rise to challenges not encountered in centralized machine learning, including the need to preserve the privacy of the local datasets and the communication load due to the repeated exchange of updated models. These challenges are often tackled individually via techniques that induce some distortion on the updated models, e.g., local differential privacy (LDP) mechanisms and lossy compression. In this work, we propose a method coined joint privacy enhancement and quantization (JoPEQ), which jointly implements lossy compression and privacy enhancement in FL settings. In particular, JoPEQ utilizes vector quantization based on a random lattice, a universal compression technique whose byproduct distortion is statistically equivalent to additive noise. This distortion is leveraged to enhance privacy by augmenting the model updates with dedicated multivariate privacy-preserving noise. We show that JoPEQ simultaneously quantizes data at a required bit-rate and provides a desired privacy level, without notably affecting the utility of the learned model. This is shown via analytical LDP guarantees, derivation of distortion and convergence bounds, and numerical studies. Finally, we empirically demonstrate that JoPEQ withstands common attacks known to exploit privacy leakage.
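The enabling property, that the distortion of subtractive dithered lattice quantization is statistically equivalent to additive noise independent of the data, is easy to demonstrate in the scalar case. A numpy sketch of a subtractive dithered uniform quantizer (a one-dimensional stand-in for the paper's multivariate lattice quantizer, without the added privacy-preserving noise):

```python
import numpy as np

def dithered_quantize(x, delta, rng):
    """Subtractive dithered uniform quantization with step size delta.
    The decoder knows the dither d and subtracts it, so the end-to-end error
    Q(x + d) - d - x is uniform on [-delta/2, delta/2] and independent of x,
    i.e., the quantizer behaves like an additive-noise channel."""
    d = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((x + d) / delta) - d

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)          # stand-in for model-update entries
err = dithered_quantize(x, delta=0.5, rng=rng) - x
```

The empirical error is zero-mean with variance ≈ δ²/12, matching uniform additive noise; JoPEQ's idea is to shape further privacy-preserving noise on top of this unavoidable quantization noise so that one distortion budget serves both compression and LDP.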