Channel state information (CSI) is essential for reaping the full benefits of millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems. Traditional channel estimation methods using pilot frames (PF) incur excessive overhead. To reduce the demand for PF, data frames (DF) can be adopted for joint channel estimation and data recovery. However, the computational complexity of DF-based methods is prohibitively high. To reduce this complexity, we propose a joint channel estimation and data recovery (JCD) method assisted by a small number of PF for mmWave massive MIMO systems. The proposed method has two stages. In Stage 1, unlike traditional PF-based methods, the proposed PF-assisted method captures the angles of arrival (AoA) of the principal components (PC) of the channels. In Stage 2, JCD is designed for parallel implementation based on a multi-user decoupling strategy. Theoretical analysis demonstrates that the PF-assisted JCD method achieves performance equivalent to the Bayesian-optimal DF-based method while greatly reducing computational complexity. Simulation results are also presented to validate the analytical results.
Deep neural networks have been widely used in communication signal recognition and have achieved remarkable performance. This superiority, however, typically depends on massive labeled examples for supervised learning; training a deep neural network on a small dataset with few labels generally leads to overfitting and degraded performance. To this end, we develop a semi-supervised learning (SSL) method that effectively utilizes a large collection of more readily available unlabeled signal data to improve generalization. The proposed method relies largely on a novel implementation of consistency-based regularization, termed Swapped Prediction, which leverages strong data augmentation to perturb an unlabeled sample and then encourages its model prediction to stay close to that of the original, optimized with a scaled cross-entropy loss with swapped symmetry. Extensive experiments indicate that our proposed method achieves promising results for deep SSL of communication signal recognition.
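The swapped symmetric consistency objective described above can be sketched in a few lines. The following numpy code is an illustrative sketch of the general idea only, not the paper's exact implementation: the function names are assumptions, and in practice the loss is applied to deep-network outputs with gradients stopped on the target side.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def swapped_consistency_loss(logits_orig, logits_aug, scale=1.0):
    """Symmetric (swapped) cross-entropy between the model's predictions
    on an unlabeled sample and on its strongly augmented view."""
    p = softmax(logits_orig)
    q = softmax(logits_aug)
    eps = 1e-12
    ce_pq = -(p * np.log(q + eps)).sum(axis=-1)  # p as target for q
    ce_qp = -(q * np.log(p + eps)).sum(axis=-1)  # swapped: q as target for p
    return scale * 0.5 * (ce_pq + ce_qp).mean()
```

The swap makes the loss symmetric in the two views, so neither prediction is privileged as the fixed target.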
Deep learning has been widely used in radio frequency (RF) fingerprinting. Despite its excellent performance, most existing methods only consider a closed-set assumption and cannot effectively handle signals emitted by unknown devices never seen during training. In this letter, we exploit prototype learning for open-set RF fingerprinting and propose two improvements, consistency-based regularization and online label smoothing, which aim to learn a more robust feature space. Experimental results on a real-world RF dataset demonstrate that the proposed measures significantly improve prototype learning and achieve promising open-set recognition performance for RF fingerprinting.
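The open-set decision rule that prototype learning enables can be illustrated with a minimal sketch: a feature vector is assigned to the nearest class prototype and rejected as unknown when it is far from all prototypes. The numpy code below is a generic illustration of such a rule under an assumed Euclidean metric and threshold, not the letter's exact method.

```python
import numpy as np

def prototype_classify(feats, prototypes, threshold):
    """Assign each feature vector to its nearest class prototype; reject
    as unknown (label -1) when the distance exceeds the threshold."""
    # pairwise distances: (num_samples, num_classes)
    d = np.linalg.norm(feats[:, None, :] - prototypes[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    return np.where(d.min(axis=1) <= threshold, nearest, -1)
```

A more robust feature space, as targeted by the two proposed improvements, tightens each class cluster around its prototype so that a single distance threshold separates known from unknown devices more reliably.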
As a revolutionary generative paradigm of deep learning, generative adversarial networks (GANs) have been widely applied in various fields to synthesize realistic data. However, it is challenging for conventional GANs to synthesize raw signal data, especially in complex cases. In this paper, we develop a novel GAN framework for radio generation called "Radio GAN". Compared to conventional methods, it benefits from three key improvements. The first is learning based on sampling points, which aims to model the underlying sampling distribution of radio signals. The second is an unrolled generator design, combined with an estimated pure signal distribution as a prior, which greatly reduces learning difficulty and effectively improves learning precision. Finally, we present an energy-constrained optimization algorithm that achieves better training stability and convergence. Experimental results with extensive simulations demonstrate that our proposed GAN framework can effectively learn transmitter characteristics and various channel effects, thus accurately modeling the underlying sampling distribution to synthesize radio signals of high quality.
As a promising non-password authentication technology, radio frequency (RF) fingerprinting can greatly improve wireless security. Recent work has shown that RF fingerprinting based on deep learning can significantly outperform conventional approaches. This superiority, however, is mainly attributed to supervised learning with a large amount of labeled data, and it degrades significantly when only limited labeled data are available, which limits the practicality of many existing algorithms. Considering that sufficient unlabeled data can often be obtained in practice with minimal resources, we leverage deep semi-supervised learning for RF fingerprinting, which relies largely on a composite data augmentation scheme designed for radio signals, combined with two popular techniques: consistency-based regularization and pseudo-labeling. Experimental results on both simulated and real-world datasets demonstrate that our proposed method for semi-supervised RF fingerprinting is far superior to competing ones and achieves remarkable performance close to that of fully supervised learning with a very limited number of labeled examples.
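Of the two techniques combined here, pseudo-labeling admits a particularly compact illustration: unlabeled samples whose predicted confidence exceeds a threshold receive hard labels and are folded into training. The numpy sketch below shows only that selection step; the function name and threshold value are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Confidence-based pseudo-labeling: keep an unlabeled sample only when
    its top class probability exceeds the threshold. Returns a boolean
    retention mask and the hard (argmax) labels for all samples."""
    conf = probs.max(axis=1)          # top-1 confidence per sample
    mask = conf >= threshold          # which samples to retain
    return mask, probs.argmax(axis=1)
```

Low-confidence samples are simply discarded for the current training step; they may still be selected later as the model improves.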
Decentralized federated learning (DFL) is a variant of federated learning in which edge nodes communicate only with their one-hop neighbors to learn the optimal model. However, because information exchange in DFL is restricted to one-hop neighborhoods, inefficient exchange requires more communication rounds to reach a targeted training loss, which greatly reduces communication efficiency. In this paper, we propose a new non-uniform quantization of model parameters to improve DFL convergence. Specifically, we first apply the Lloyd-Max algorithm to DFL (LM-DFL) to minimize quantization distortion by adjusting the quantization levels adaptively. A convergence guarantee for LM-DFL is established without a convex loss assumption. Based on LM-DFL, we then propose a new doubly-adaptive DFL, which jointly uses an ascending number of quantization levels to reduce the amount of communicated information during training and adapts the quantization levels to non-uniform gradient distributions. Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of LM-DFL in minimizing quantization distortion and show that doubly-adaptive DFL can greatly improve communication efficiency.
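The Lloyd-Max step at the core of LM-DFL alternates between two updates: decision boundaries are placed at midpoints between neighboring reconstruction levels, and each level is moved to the conditional mean (centroid) of its cell, which reduces mean squared quantization distortion. A minimal scalar sketch of the classical algorithm follows; it is illustrative only, not the paper's distributed implementation, and the initialization and iteration count are arbitrary choices.

```python
import numpy as np

def lloyd_max(x, levels, iters=50):
    """Lloyd-Max scalar quantizer: alternate between setting decision
    boundaries to midpoints of neighboring levels and moving each level
    to the centroid of its quantization cell."""
    q = np.quantile(x, np.linspace(0, 1, levels + 2)[1:-1])  # initial levels
    for _ in range(iters):
        b = 0.5 * (q[:-1] + q[1:])        # decision boundaries
        idx = np.digitize(x, b)           # cell assignment per sample
        for k in range(levels):
            cell = x[idx == k]
            if cell.size:
                q[k] = cell.mean()        # centroid update
    return np.sort(q)

def quantize(x, q):
    """Map each sample to its reconstruction level."""
    b = 0.5 * (q[:-1] + q[1:])
    return q[np.digitize(x, b)]
```

For non-uniform distributions such as Gaussian-like model parameters, the resulting levels cluster where the probability mass is, beating uniformly spaced levels at the same bit budget.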
The computational prediction of wave propagation in dam-break floods is a long-standing problem in hydrodynamics and hydrology. To date, conventional numerical models based on the Saint-Venant equations have been the dominant approach. Here we show that a machine learning model trained on a minimal amount of data can help predict the long-term dynamic behavior of a one-dimensional dam-break flood with satisfactory accuracy. For this purpose, we solve the Saint-Venant equations for a one-dimensional dam-break flood scenario using the Lax-Wendroff numerical scheme and train a reservoir computing echo state network (RC-ESN) on simulation results consisting of time sequences of flow depth. We demonstrate the good prediction ability of the RC-ESN model, which predicts wave propagation behavior 286 time steps ahead in the dam-break flood with a root mean square error (RMSE) smaller than 0.01, outperforming the conventional long short-term memory (LSTM) model, which reaches a comparable RMSE only 81 time steps ahead. To assess the performance of the RC-ESN model, we also provide a sensitivity analysis of the prediction accuracy with respect to key parameters, including training set size, reservoir size, and spectral radius. Results indicate that the RC-ESN is less dependent on training set size, and a medium reservoir size of K = 1200-2600 is sufficient. We confirm that the spectral radius ρ has a complex influence on prediction accuracy and currently suggest a smaller spectral radius. By varying the initial flow depth of the dam break, we also find that the prediction horizon of the RC-ESN is larger than that of the LSTM.
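An echo state network is conceptually simple: a large, fixed random recurrent reservoir is driven by the input sequence, and only a linear readout is trained, typically by ridge regression, with the recurrent weights rescaled to a chosen spectral radius ρ. The numpy sketch below shows this one-step-ahead training recipe in its most generic form; the sizes, seed, and hyperparameters are placeholder assumptions and do not reflect those used in the study.

```python
import numpy as np

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Drive a fixed random reservoir with the input sequence u and fit a
    linear readout to targets y by ridge regression. Recurrent weights are
    rescaled so their spectral radius equals rho."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, u.shape[1]))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.abs(np.linalg.eigvals(W)).max()   # set spectral radius
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)            # reservoir update
        states[t] = x
    # ridge-regression readout: y ~= states @ W_out
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ y)
    return states, W_out
```

Because only the readout is trained, fitting reduces to a single linear solve, which is one reason reservoir computing needs comparatively little training data.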
In time-division duplexing (TDD) millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, reciprocity mismatch severely degrades the performance of hybrid beamforming (HBF). In this work, to mitigate the detrimental effect of reciprocity mismatch, we investigate reciprocity calibration for the mmWave-HBF system with a fully-connected phase shifter network. To reduce the overhead and computational complexity of reciprocity calibration, we first decouple the digital and analog radio frequency (RF) chains through beamforming design. The entire calibration problem of the HBF system is then equivalently decomposed into two subproblems corresponding to digital-chain and analog-chain calibration. To solve these problems efficiently, a closed-form solution to the digital-chain calibration problem is derived, while an iterative-alternating optimization algorithm is proposed for the analog-chain calibration problem. To assess the performance of the proposed algorithm, we derive the Cramér-Rao lower bound on the errors in estimating the mismatch coefficients. The results reveal that the estimation errors of the mismatch coefficients of digital and analog chains are uncorrelated, and that the mismatch coefficients of the receive digital chains can be estimated perfectly. Simulation results are presented to validate the analytical results and to show the performance of the proposed calibration approach.
Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems rely on large-scale antenna arrays to combat the large path loss in the mmWave band. Due to hardware characteristics and deployment environments, mmWave massive MIMO systems are vulnerable to antenna element blockages and failures, which necessitate diagnostic techniques that locate faulty antenna elements for calibration purposes. Current diagnostic techniques require full or partial knowledge of channel state information (CSI), which can be challenging to acquire in the presence of antenna failures. In this letter, we propose a blind diagnostic technique to identify faulty antenna elements in mmWave massive MIMO systems that does not require any CSI knowledge. By jointly exploiting the sparsity of the mmWave channel and of the failures, we first formulate diagnosis as a joint sparse recovery problem. Then, the atomic norm is introduced to induce the sparsity of the mmWave channel over a continuous Fourier dictionary, and an efficient algorithm based on the alternating direction method of multipliers (ADMM) is proposed to solve the resulting problem. Finally, the performance of the proposed technique is evaluated through numerical simulations.
Intelligent reflecting surface (IRS) is a promising technology for enhancing wireless communication systems, which adaptively configures massive passive reflecting elements to control the wireless channel in a desirable way. Due to hardware characteristics and deployment environments, the IRS may be subject to reflecting element blockages and failures, and hence developing diagnostic techniques is of great significance for system monitoring and maintenance. In this paper, we develop diagnostic techniques for IRS systems that locate faulty reflecting elements and retrieve failure parameters. Three cases of channel state information (CSI) availability are considered. In the first case, where full CSI is available, a compressed sensing based diagnostic technique is proposed, which significantly reduces the required number of measurements. In the second case, where only partial CSI is available, we jointly exploit the sparsity of the millimeter-wave channel and of the failures, and adopt a compressed sparse plus low-rank matrix recovery algorithm to decouple the channel and the failures. In the third case, where no CSI is available, a novel atomic norm is introduced as the sparsity-inducing norm of the cascaded channel, and the diagnosis problem is formulated as a joint sparse recovery problem. Finally, the proposed diagnostic techniques are validated through numerical simulations.
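For the full-CSI case, the compressed sensing recovery step can be illustrated with a standard greedy solver such as orthogonal matching pursuit (OMP), which recovers a sparse failure vector from far fewer measurements than unknowns. This is a generic textbook sketch under an assumed random measurement matrix, not the specific algorithm proposed in the paper.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column most
    correlated with the residual, refit the retained columns by least
    squares, and repeat until the sparsity budget is used."""
    r, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ r))))      # best new atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                          # update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

With a well-conditioned (e.g., Gaussian) measurement matrix, a vector with only a few nonzero failure parameters is recovered exactly from a number of measurements proportional to the sparsity level rather than the number of reflecting elements.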