This work proposes a novel channel estimator based on diffusion models (DMs), one of the currently most powerful classes of generative models. In contrast to related works utilizing generative priors, a lightweight convolutional neural network (CNN) with positional embedding of the signal-to-noise ratio (SNR) information is designed, which learns the channel distribution in the sparse angular domain. Combined with an estimation strategy that avoids stochastic resampling and truncates the reverse diffusion steps corresponding to SNR values below that of the given pilot observation, the resulting DM estimator exhibits both low complexity and low memory overhead. Numerical results demonstrate better performance than state-of-the-art channel estimators utilizing generative priors.
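The truncation idea can be sketched in a few lines. The following is a minimal illustration, not the paper's method: the CNN denoiser is replaced by the analytic conditional mean of a toy white Gaussian channel prior, and all schedule values are illustrative assumptions. The pilot observation is mapped onto the diffusion trajectory at the step whose SNR matches the observation SNR, and only deterministic (resampling-free) reverse steps from there down to step zero are executed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 16, 100

# Variance-preserving noise schedule: x_t = sqrt(ab_t)*h + sqrt(1-ab_t)*eps
betas = np.linspace(1e-4, 0.05, T)
alphas_bar = np.cumprod(1.0 - betas)
snr_schedule = alphas_bar / (1.0 - alphas_bar)  # per-step SNR of the forward process

# Pilot observation y = h + w at a given SNR (toy prior h ~ N(0, I))
snr_obs = 10.0
h = rng.standard_normal(n)
y = h + rng.standard_normal(n) / np.sqrt(snr_obs)

# Truncation: start the reverse process at the step whose SNR matches the
# pilot SNR, skipping all steps that correspond to lower SNR values
t_start = int(np.argmin(np.abs(snr_schedule - snr_obs)))
x = np.sqrt(alphas_bar[t_start]) * y  # place y on the diffusion trajectory

# Deterministic (resampling-free) reverse steps; the paper's CNN is replaced
# here by the analytic conditional mean E[h | x_t] = sqrt(ab_t)*x_t of the
# toy Gaussian prior, so the sketch stays self-contained
for t in range(t_start, 0, -1):
    h_hat = np.sqrt(alphas_bar[t]) * x
    eps_hat = (x - np.sqrt(alphas_bar[t]) * h_hat) / np.sqrt(1.0 - alphas_bar[t])
    x = np.sqrt(alphas_bar[t - 1]) * h_hat + np.sqrt(1.0 - alphas_bar[t - 1]) * eps_hat

h_est = np.sqrt(alphas_bar[0]) * x  # final conditional-mean output
```

Because only the steps above the observation SNR are executed, the sketch runs roughly a fifth of the schedule here, which is the source of the low-complexity claim.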
Diffusion probabilistic models (DPMs) have recently shown great potential for denoising tasks. Despite their practical utility, there is a notable gap in their theoretical understanding. This paper contributes novel theoretical insights by rigorously proving the asymptotic convergence of a specific DPM denoising strategy to the mean square error (MSE)-optimal conditional mean estimator (CME) as the number of diffusion steps grows large. The studied DPM-based denoiser shares the training procedure of DPMs but distinguishes itself by forwarding only the conditional mean during the reverse inference process after training. We highlight the unique perspective that a DPM contains an asymptotically optimal denoiser and simultaneously inherits a powerful generator, obtained by switching the re-sampling in the reverse process off and on, respectively. The theoretical findings are validated by numerical results.
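The on/off switch between denoiser and generator can be made concrete with a toy sketch. This is not the paper's proof or experiment: the trained noise-prediction network is replaced by its analytic value for a scalar prior x0 ~ N(0, 1), and the schedule is an illustrative assumption. With re-sampling off, the reverse recursion collapses to the exact conditional mean for this toy prior; with re-sampling on, it produces samples from the prior.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
ab = np.cumprod(alphas)

def eps_hat(x, t):
    # Analytic noise prediction for the toy prior x0 ~ N(0, 1):
    # E[x0 | x_t] = sqrt(ab_t)*x_t, hence eps_hat = sqrt(1 - ab_t)*x_t
    return np.sqrt(1.0 - ab[t]) * x

def reverse_process(x, resample, rng):
    # Standard DDPM reverse recursion; `resample` toggles the stochastic
    # re-sampling term on (generator) or off (conditional-mean denoiser)
    for t in range(T - 1, -1, -1):
        mean = (x - betas[t] / np.sqrt(1.0 - ab[t]) * eps_hat(x, t)) / np.sqrt(alphas[t])
        if resample and t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(np.shape(x))
        else:
            x = mean
    return x

x_T = rng.standard_normal(2000)
denoised = reverse_process(x_T, resample=False, rng=rng)   # equals E[x0 | x_T] here
generated = reverse_process(x_T, resample=True, rng=rng)   # approx. prior samples
```

For this linear-Gaussian toy case, the mean-only output is exactly sqrt(ab_T)·x_T, i.e., the CME, while the re-sampled output has unit variance like the prior, mirroring the denoiser/generator duality stated above.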
This work utilizes a variational autoencoder for channel estimation and evaluates it on real-world measurements. The estimator is trained solely on noisy channel observations and parameterizes an approximation to the mean squared error-optimal estimator by learning observation-dependent conditional first and second moments. The proposed estimator significantly outperforms related state-of-the-art estimators on real-world measurements. We investigate the effect of pre-training with synthetic data and find that the proposed estimator achieves results comparable to the related estimators when trained on synthetic data and evaluated on the measurement data. Furthermore, pre-training on synthetic data also helps to reduce the required size of the measurement training dataset.
When only a few data samples are available, utilizing structural prior knowledge is essential for estimating covariance matrices and their inverses. One prominent example is a Toeplitz structured covariance matrix, which arises when dealing with wide-sense stationary (WSS) processes. This work introduces a novel class of likelihood-based estimators for Toeplitz structured covariance matrices (CMs) and their inverses that ensure positive definiteness. To accomplish this, we derive positive-definiteness-enforcing constraint sets for the Gohberg-Semencul (GS) parameterization of inverse symmetric Toeplitz matrices. Motivated by the relationship between the GS parameterization and autoregressive (AR) processes, we propose hyperparameter tuning techniques that enable our estimators to combine the advantages of state-of-the-art likelihood and non-parametric estimators. Moreover, we present a computationally cheap closed-form estimator derived by maximizing an approximate likelihood. Due to the ensured positive definiteness, our estimators perform well for the estimation of both the CM and the inverse covariance matrix (ICM). Extensive simulation results validate the proposed estimators' efficacy for several standard Toeplitz structured CMs commonly employed in a wide range of applications.
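For readers unfamiliar with the GS parameterization, the following sketch shows the classical Gohberg-Semencul formula it builds on: the inverse of a symmetric Toeplitz matrix is expressed through a single parameter vector u solving T u = e_1. The constraint sets and estimators of the paper are not reproduced here; this only illustrates the underlying parameterization on a hand-made AR(1)-like covariance.

```python
import numpy as np

def lower_toeplitz(c):
    """Lower-triangular Toeplitz matrix with first column c."""
    n = len(c)
    M = np.zeros((n, n))
    for i in range(n):
        M[i:, i] = c[: n - i]
    return M

def gs_inverse(u):
    """Gohberg-Semencul formula for symmetric Toeplitz T, given u with T u = e_1:
    T^{-1} = (U U^T - V V^T) / u[0], where U, V are triangular Toeplitz."""
    U = lower_toeplitz(u)
    V = lower_toeplitz(np.concatenate(([0.0], u[:0:-1])))  # (0, u_n, ..., u_2)
    return (U @ U.T - V @ V.T) / u[0]

# Example: symmetric Toeplitz CM with entries 0.5^|i-j| (AR(1)-like process)
n = 6
idx = np.arange(n)
T_mat = 0.5 ** np.abs(idx[:, None] - idx[None, :])
u = np.linalg.solve(T_mat, np.eye(n)[:, 0])  # first column of T^{-1}
T_inv = gs_inverse(u)
```

The parameter vector u is exactly where the paper's positive-definiteness constraints act: restricting u appropriately keeps the reconstructed matrix a valid ICM.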
In this work, we propose to utilize a variational autoencoder (VAE) for channel estimation (CE) in underdetermined (UD) systems. The method builds on a recently proposed concept in which a VAE is trained on channel state information (CSI) data and used to parameterize an approximation to the mean squared error (MSE)-optimal estimator. The contributions in this work extend the existing framework from fully determined (FD) to UD systems, which are of high practical relevance. Particularly noteworthy is the extension of the estimator variant that does not require perfect CSI during its offline training phase. This is a significant advantage over most other deep learning (DL)-based CE methods, for which perfect CSI during the training phase is a crucial prerequisite. Numerical simulations for hybrid and wideband systems demonstrate the excellent performance of the proposed methods compared to related estimators.
In this manuscript, we propose to utilize the generative neural network-based variational autoencoder for channel estimation. The variational autoencoder models the underlying true but unknown channel distribution as a conditional Gaussian distribution in a novel way. The derived channel estimator exploits the internal structure of the variational autoencoder to parameterize an approximation of the mean squared error-optimal estimator resulting from the conditional Gaussian channel model. We provide a rigorous analysis of the conditions under which a variational autoencoder-based estimator is mean squared error-optimal. We then present considerations that make the variational autoencoder-based estimator practical and propose three estimator variants that differ in their access to channel knowledge during the training and evaluation phases. The variant trained solely on noisy pilot observations is particularly noteworthy, as it does not require access to noise-free, ground-truth channel data during training or evaluation. Extensive numerical simulations first analyze the internal behavior of the variational autoencoder-based estimators and then demonstrate excellent channel estimation performance compared to related classical and machine learning-based state-of-the-art channel estimators.
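The estimator structure described above can be sketched as follows. This is an illustration, not the paper's implementation: the decoder outputs mu and C for a given latent sample are replaced by fixed placeholder values (a zero mean and a white covariance), and the observation model y = A h + n with noise variance sigma2 is assumed. Given such conditional Gaussian parameters, the channel estimate is the familiar LMMSE expression evaluated with them.

```python
import numpy as np

def vae_lmmse(y, mu, C, A, sigma2):
    """LMMSE estimate conditioned on decoder outputs (mu, C):
    h_hat = mu + C A^H (A C A^H + sigma2 I)^{-1} (y - A mu)."""
    G = A @ C @ A.conj().T + sigma2 * np.eye(A.shape[0])
    return mu + C @ A.conj().T @ np.linalg.solve(G, y - A @ mu)

# Toy check with an identity observation matrix and a white placeholder
# prior, where the formula reduces to the known shrinkage y / (1 + sigma2)
n = 8
rng = np.random.default_rng(2)
y = rng.standard_normal(n)
h_hat = vae_lmmse(y, np.zeros(n), np.eye(n), np.eye(n), sigma2=0.5)
```

In the actual estimator, mu and C depend on the observation through the encoder and decoder, which is what lets the conditionally Gaussian model approximate the MSE-optimal estimator.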
In this work, we consider the use of a model-based decoder in combination with an unsupervised learning strategy for direction-of-arrival (DoA) estimation. Relying only on unlabeled training data, we show in our analysis that we can outperform existing unsupervised machine learning methods as well as classical methods. This is achieved by introducing a model-based decoder into an autoencoder architecture, which leads to a meaningful representation of the statistical model in the latent space. Our numerical simulations show that the performance of the presented approach is not degraded by correlated signals but rather improves slightly, because we estimate the correlation parameters jointly with the DoAs.
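A minimal sketch of what a model-based decoder for DoA estimation computes is given below. The assumptions are ours, not the paper's: a uniform linear array with half-wavelength spacing, and a decoder that maps DoAs plus a signal covariance P (whose off-diagonals carry the correlation parameters mentioned above) to the array covariance R = A P A^H + sigma2 I.

```python
import numpy as np

def steering(n_antennas, doas_rad):
    # ULA steering matrix, half-wavelength spacing (illustrative assumption)
    k = np.arange(n_antennas)[:, None]
    return np.exp(1j * np.pi * k * np.sin(doas_rad)[None, :])

def model_covariance(doas_rad, P, noise_var, n_antennas):
    # Decoder output: array covariance implied by the statistical model
    A = steering(n_antennas, doas_rad)
    return A @ P @ A.conj().T + noise_var * np.eye(n_antennas)

# Two correlated sources (off-diagonal of P models their correlation)
doas = np.deg2rad(np.array([-20.0, 35.0]))
P = np.array([[1.0, 0.4], [0.4, 1.0]])
R = model_covariance(doas, P, noise_var=0.1, n_antennas=8)
```

Training the encoder to reproduce observed covariances through such a decoder ties the latent space to physically meaningful quantities, which is the intuition behind the latent-space interpretation above.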
One way to improve the estimation of time-varying channels is to incorporate knowledge of previous observations. In this context, dynamical VAEs (DVAEs) form a promising deep learning (DL) framework that is well suited to learning the distribution of time series data. We introduce a new DVAE architecture, called k-MemoryMarkovVAE (k-MMVAE), whose sparsity can be controlled by an additional memory parameter. Following the approach in [1], we derive a k-MMVAE-aided channel estimator that takes the temporal correlations of successive observations into account. The results are evaluated on channels simulated with QuaDRiGa and show that the k-MMVAE-aided channel estimator clearly outperforms other machine learning (ML)-aided estimators that are either memoryless or naively extended to time-varying channels without major adaptations.
Classical methods for model order selection often fail in scenarios with low SNR or few snapshots. Deep learning-based methods are promising alternatives in such challenging situations, as they compensate for the lack of information in the observations through repeated training on large datasets. This manuscript proposes an approach that uses a variational autoencoder (VAE) for model order selection. The idea is to learn a parameterized conditional covariance matrix at the VAE decoder that approximates the true signal covariance matrix. The method itself is unsupervised and only requires a small representative dataset for calibration purposes after the VAE is trained. Numerical simulations show that the proposed method clearly outperforms classical methods and even reaches or beats a supervised approach, depending on the number of snapshots considered.
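To make the selection idea concrete, here is a deliberately simplified sketch: the decoder's learned conditional covariance is replaced by a hand-made stand-in with three dominant signal eigenvalues, and the calibration step is reduced to a single hypothetical eigenvalue threshold. The paper's actual calibration procedure is not reproduced here.

```python
import numpy as np

n, order_true = 8, 3
# Stand-in for the learned conditional covariance: three signal components
# on top of a small noise floor (illustrative values, not learned)
R = np.diag([4.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]) + 0.01 * np.eye(n)

eigs = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending
threshold = 0.1                              # hypothetical calibrated threshold
order_est = int(np.sum(eigs > threshold))    # count dominant components
```

The point of learning the covariance is that, in low-SNR or few-snapshot regimes, the decoder's output exhibits a much cleaner eigenvalue gap than the sample covariance on which classical criteria operate.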
We propose to utilize a variational autoencoder (VAE) for data-driven channel estimation. The underlying true and unknown channel distribution is modeled by the VAE as a conditional Gaussian distribution in a novel way, parameterized by the respective conditional first and second moments. As a result, the linear minimum mean square error (LMMSE) estimator conditioned on the latent sample of the VAE approximates an MSE-optimal estimator. Furthermore, we argue how a VAE-based channel estimator can approximate the MMSE channel estimator. We propose three variants of VAE estimators that differ in the data used during training and estimation. First, we show that given perfectly known channel state information at the VAE input during estimation, which is impractical, we obtain an estimator that can serve as a benchmark for an estimation scenario. We then propose practically feasible approaches in which perfectly known channel state information is necessary only in the training phase or is not needed at all. Simulation results on 3GPP and QuaDRiGa channel data attest to a small performance loss of the practical approaches and the superiority of our VAE approaches over other related channel estimation methods.