Abstract: This paper analyzes the impact of a pilot-sharing scheme on synchronization performance in a scenario where several slave access points (APs) with uncertain carrier frequency offsets (CFOs) and timing offsets (TOs) share a common pilot sequence. First, the Cramér-Rao bound (CRB) with pilot contamination is derived for pilot-pairing estimation. Furthermore, a maximum likelihood algorithm is presented to estimate the CFO and TO among the pairing APs. Then, to minimize the sum of CRBs, we devise a synchronization strategy based on a pilot-sharing scheme by jointly optimizing the cluster classification, synchronization overhead, and pilot-sharing scheme, while simultaneously accounting for the overhead and each AP's synchronization requirements. To solve this NP-hard problem, we decompose it into two sub-problems, namely the cluster classification problem and the pilot-sharing problem. To strike a balance between synchronization performance and overhead, we first classify the clusters using the K-means algorithm and propose a criterion for finding a good set of master APs. Then, the pilot-sharing scheme is obtained through swap-matching operations. Simulation results validate the accuracy of our derivations and demonstrate the effectiveness of the proposed scheme over the benchmark schemes.
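For concreteness, the following is a minimal sketch of how the K-means cluster classification and master-AP selection step described above might be prototyped. The 2-D AP coordinates and the nearest-to-centroid master criterion are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: K-means clustering of APs and a simple master-AP
# selection criterion (the member closest to each cluster centroid).
# AP coordinates and the number of clusters are hypothetical inputs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ap_positions = rng.uniform(0, 500, size=(32, 2))   # 32 APs in a 500 m x 500 m area

n_clusters = 4
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(ap_positions)

masters = []
for c in range(n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    # Assumed criterion: the member AP nearest the cluster centroid becomes the
    # master; the remaining members act as slave APs sharing a pilot.
    dists = np.linalg.norm(ap_positions[members] - kmeans.cluster_centers_[c], axis=1)
    masters.append(members[np.argmin(dists)])

print("master AP per cluster:", masters)
```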
Abstract: Combining millimetre-wave (mmWave) communications with an extremely large-scale antenna array (ELAA) presents a promising avenue for meeting the spectral-efficiency demands of future sixth-generation (6G) mobile communications. However, beam training for mmWave ELAA systems is challenged by excessive pilot overhead as well as insufficient accuracy, since the huge near-field codebook has to be accounted for. In this paper, inspired by the similarity between far-field sub-6 GHz channels and near-field mmWave channels, we propose to leverage sub-6 GHz uplink pilot signals to directly estimate the optimal near-field mmWave codeword, which reduces pilot overhead and bypasses explicit channel estimation. Moreover, we adopt deep learning to perform this dual mapping, i.e., from sub-6 GHz to mmWave and from far field to near field, and design a novel neural network structure called NMBEnet to enhance the precision of beam training. Specifically, in orthogonal frequency division multiplexing (OFDM) scenarios with high user density, correlations arise both between signals from different users and between signals from different subcarriers. Accordingly, the convolutional neural network (CNN) module and the graph neural network (GNN) module included in the proposed NMBEnet leverage these two correlations to further enhance the precision of beam training.
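The snippet below is a rough, hedged sketch of the dual-mapping idea: a small CNN extracts per-user features from sub-6 GHz OFDM pilots, and a simple message-passing (GNN-style) layer shares information across users before predicting a near-field codeword index. All layer sizes, feature shapes, and the codebook size are illustrative assumptions, not the actual NMBEnet architecture.

```python
# Rough PyTorch sketch: CNN over per-user pilot features (real/imag x antennas x
# subcarriers), mean-aggregation message passing across users, and a classifier
# over an assumed near-field codebook. Dimensions are illustrative only.
import torch
import torch.nn as nn

class BeamNet(nn.Module):
    def __init__(self, n_ant=8, n_sc=64, codebook_size=256, d=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d))
        self.msg = nn.Linear(d, d)        # message function
        self.upd = nn.Linear(2 * d, d)    # node update after aggregation
        self.head = nn.Linear(d, codebook_size)

    def forward(self, x):                 # x: (n_users, 2, n_ant, n_sc)
        h = self.cnn(x)                   # per-user features
        agg = self.msg(h).mean(dim=0, keepdim=True).expand_as(h)  # share across users
        h = torch.relu(self.upd(torch.cat([h, agg], dim=-1)))
        return self.head(h)               # logits over near-field codewords

net = BeamNet()
pilots = torch.randn(4, 2, 8, 64)         # 4 users, real/imag pilot maps
print(net(pilots).shape)                  # torch.Size([4, 256])
```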
Abstract: This letter considers an active reconfigurable intelligent surface (RIS)-aided multi-user uplink massive multiple-input multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs). The letter derives a closed-form approximate expression for the sum achievable rate (AR), where maximum ratio combining (MRC) processing and low-resolution ADCs are applied at the base station. The system performance is analyzed, and a genetic algorithm (GA)-based method is proposed to optimize the RIS's phase shifts to enhance the system performance. Numerical results verify the accuracy of the derivations and demonstrate that the active RIS achieves an evident performance gain over the passive RIS.
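A minimal genetic-algorithm sketch for the phase-shift optimization is given below. The objective function is a placeholder for the paper's closed-form sum-AR approximation (not reproduced here), and the population size, crossover, and mutation settings are arbitrary assumptions.

```python
# Minimal GA sketch for tuning RIS phase shifts; objective() stands in for the
# closed-form sum-AR expression and simply measures coherent combining gain.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                     # number of RIS elements
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # one cascaded-channel draw

def objective(theta):
    # Placeholder objective: magnitude of the coherently combined channel.
    return np.abs(np.sum(h * np.exp(1j * theta)))

pop = rng.uniform(0, 2 * np.pi, size=(40, N))              # initial phase-vector population
for gen in range(200):
    fitness = np.array([objective(p) for p in pop])
    parents = pop[np.argsort(fitness)[::-1][:20]]           # keep the fittest half
    children = parents[rng.integers(0, 20, 20)].copy()
    cross = rng.random((20, N)) < 0.5                       # uniform crossover
    children[cross] = parents[rng.integers(0, 20, 20)][cross]
    mutate = rng.random((20, N)) < 0.05                     # random phase mutation
    children[mutate] = rng.uniform(0, 2 * np.pi, mutate.sum())
    pop = np.vstack([parents, children])

print("best objective:", max(objective(p) for p in pop))
```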
Abstract: In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because the vehicles differ in the amount of local data, computing capability, and location, updating the global model with equal weights is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also suffer Byzantine attacks that corrupt its data. Based on deep reinforcement learning (DRL), however, we can consider these factors comprehensively to eliminate poorly performing vehicles as far as possible and to exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, during AFL aggregation we can focus on the vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account the vehicles' mobility, time-varying channel conditions, time-varying computational resources, differing data amounts, transmission channel status, and Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
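The following conceptual sketch illustrates how a DRL policy might score and select vehicles from per-vehicle state features such as data amount, compute capability, channel quality, distance, and a Byzantine-suspicion score. The feature definitions, network sizes, and the top-k selection rule are illustrative assumptions rather than the paper's exact design.

```python
# Conceptual PyTorch sketch: a Q-network scores each candidate vehicle from an
# assumed per-vehicle state vector, and an epsilon-greedy rule picks which
# vehicles join the next AFL round. Training of the Q-network is omitted.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))                        # Q-value per vehicle

    def forward(self, s):                                # s: (n_vehicles, state_dim)
        return self.net(s).squeeze(-1)

qnet = QNet()
# Assumed state layout per vehicle:
# [data_amount, cpu_frequency, channel_snr, distance, byzantine_score]
states = torch.rand(10, 5)
eps = 0.1
if torch.rand(1).item() < eps:
    selected = torch.randperm(10)[:4]                    # explore
else:
    selected = torch.topk(qnet(states), k=4).indices     # exploit: top-4 vehicles
print("vehicles selected for this AFL round:", selected.tolist())
```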
Abstract: This article introduces an energy- and spectral-efficient multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) transmission scheme designed for future sixth-generation (6G) wireless communication networks. The approach connects each receiving radio frequency (RF) chain with multiple antenna elements and performs sample-level adjustments of the receive beamforming patterns. The proposed system architecture and the dedicated signal processing methods enable the scheme to transmit a larger number of parallel data streams than the number of receiving RF chains, achieving a spectral efficiency close to that of a fully digital (FD) MIMO system with the same number of antenna elements, each equipped with an RF chain. We refer to this system as a "pseudo MIMO" system due to its ability to mimic the functionality of additional invisible RF chains. The article begins by introducing the underlying principles of pseudo MIMO and discussing potential hardware architectures for its implementation. We then highlight several advantages of integrating pseudo MIMO into next-generation wireless networks. To demonstrate the superiority of the proposed pseudo MIMO transmission scheme over conventional MIMO systems, simulation results are presented. Additionally, we validate the feasibility of the new scheme by building the first pseudo MIMO prototype. Furthermore, we present some key challenges and outline potential directions for future research.
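As a toy illustration of the sample-level receive-beamforming idea, the sketch below switches the analog combining weights of a single RF chain from sample to sample, so that consecutive samples observe the array through different patterns. The dimensions, the random weight codebook, and the quasi-static signal assumption are illustrative, not the prototype's configuration.

```python
# Toy numpy illustration: one RF chain fed by M antenna elements, with the
# analog combining vector cycled per sample; grouping M consecutive samples
# then yields an M-dimensional observation, mimicking M "invisible" RF chains.
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 64                               # antenna elements per RF chain, samples
x = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # element-level signal

# One fixed combining vector (conventional hybrid receiver) ...
w_fixed = np.exp(1j * 2 * np.pi * rng.random(M)) / np.sqrt(M)
y_fixed = w_fixed.conj() @ x               # N scalar samples, one observation per instant

# ... versus cycling through M candidate patterns sample by sample.
W = np.exp(1j * 2 * np.pi * rng.random((M, M))) / np.sqrt(M)
y_cycled = np.array([W[:, n % M].conj() @ x[:, n] for n in range(N)])

# Under a quasi-static signal assumption, every group of M consecutive samples
# forms an M-dimensional observation despite there being a single RF chain.
pseudo_obs = y_cycled[: (N // M) * M].reshape(-1, M)
print(y_fixed.shape, pseudo_obs.shape)     # (64,) (16, 4)
```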
Abstract: Reconfigurable intelligent surface (RIS)-aided localization systems have attracted extensive research attention due to their accuracy enhancement capabilities. However, most studies primarily utilize the signal received at the base station (BS), i.e., BS information, for localization algorithm design, neglecting the potential of the signal received at the RIS, i.e., RIS information. Compared with BS information, RIS information has a higher dimension and a richer feature set, and therefore significantly improves the ability to extract the positions of mobile users (MUs). Addressing this oversight, this paper explores algorithm design based on the high-dimensional RIS information. Specifically, we first propose a RIS information reconstruction (RIS-IR) algorithm to reconstruct the high-dimensional RIS information from the low-dimensional BS information. The proposed RIS-IR algorithm comprises a data processing module for preprocessing the BS information, a convolutional neural network (CNN) module for feature extraction, and an output module that outputs the reconstructed RIS information. Then, we propose a transfer learning based fingerprint (TFBF) algorithm that employs the reconstructed high-dimensional RIS information for MU localization. This involves adapting a pre-trained DenseNet-121 model to map the reconstructed RIS signal to the MU's three-dimensional (3D) position. Empirical results confirm that the localization performance benefits significantly from the high-dimensional RIS information and remains robust against unoptimized phase shifts.
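A hedged sketch of the transfer-learning step is shown below: a DenseNet-121 backbone is adapted so that its first convolution accepts the reconstructed RIS information (assumed here to be a 2-channel real/imaginary map) and its classifier regresses a 3D position. The input shaping and map size are assumptions for illustration.

```python
# PyTorch/torchvision sketch: adapt DenseNet-121 to regress (x, y, z) from a
# reconstructed RIS-information "image"; channel count and map size are assumed.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)             # or load pretrained ImageNet weights
model.features.conv0 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                 padding=3, bias=False)        # accept 2-channel RIS input
model.classifier = nn.Linear(model.classifier.in_features, 3)  # 3D position output

ris_info = torch.randn(8, 2, 64, 64)                  # batch of reconstructed RIS maps
positions = model(ris_info)
print(positions.shape)                                # torch.Size([8, 3])
```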
Abstract: In this paper, we consider a reconfigurable intelligent surface (RIS)-aided frequency division duplex (FDD) massive multiple-input multiple-output (MIMO) downlink system. In FDD systems, the downlink channel state information (CSI) must be sent to the base station through the feedback link. However, the overhead of CSI feedback occupies substantial uplink bandwidth resources in RIS-aided communication systems. In this work, we propose a deep learning (DL)-based scheme to reduce the CSI feedback overhead by compressing the cascaded CSI. In practical RIS-aided communication systems, the cascaded channel at adjacent slots inevitably exhibits time correlation. We use long short-term memory (LSTM) to learn this time correlation, which helps the neural network improve the recovery quality of the compressed CSI. Moreover, an attention mechanism is introduced to further improve the CSI recovery quality. Simulation results demonstrate that the proposed DL-based scheme significantly outperforms other DL-based methods in terms of CSI recovery quality.
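The simplified sketch below conveys the structure described above: a per-slot encoder compresses the cascaded CSI, an LSTM tracks the time correlation across slots, an attention step reweights the LSTM outputs, and a decoder reconstructs the CSI. The dimensions and layer choices are illustrative, not the paper's network.

```python
# Simplified PyTorch sketch of LSTM + attention CSI feedback; csi_dim, code_dim
# and the number of slots are assumed values for illustration.
import torch
import torch.nn as nn

class CsiFeedbackNet(nn.Module):
    def __init__(self, csi_dim=512, code_dim=32, hidden=128):
        super().__init__()
        self.encoder = nn.Linear(csi_dim, code_dim)            # per-slot compression
        self.lstm = nn.LSTM(code_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.Linear(hidden, csi_dim)               # per-slot reconstruction

    def forward(self, csi_seq):               # csi_seq: (batch, slots, csi_dim)
        codes = self.encoder(csi_seq)          # feedback payload per slot
        h, _ = self.lstm(codes)                # exploit temporal correlation
        h, _ = self.attn(h, h, h)              # attention over the slot dimension
        return self.decoder(h)                 # reconstructed cascaded CSI

net = CsiFeedbackNet()
csi = torch.randn(4, 10, 512)                  # 4 sequences of 10 slots each
print(net(csi).shape)                          # torch.Size([4, 10, 512])
```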
Abstract: Existing works mainly rely on the far-field planar-wave channel model to assess the performance of reconfigurable intelligent surface (RIS)-enabled wireless communication systems. However, when the transmitter and receiver are in the near-field range, this model results in relatively low modeling accuracy. To tackle this challenge, we first develop an analytical framework for sub-array partitioning. This framework divides the large-scale RIS array into multiple sub-arrays, effectively reducing modeling complexity while maintaining acceptable accuracy. Then, we develop a beam-domain channel model based on the proposed sub-array partitioning framework for large-scale RIS-enabled UAV-to-vehicle communication systems, which can efficiently capture the sparse features of RIS-enabled UAV-to-vehicle channels in both near-field and far-field ranges. Furthermore, several important propagation characteristics of the proposed channel model, including the spatial cross-correlation functions (CCFs), temporal auto-correlation functions (ACFs), frequency correlation functions (CFs), and channel capacities with respect to different physical features of the RIS and the non-stationary properties of the channel model, are derived and analyzed. Finally, simulation results demonstrate that the proposed framework achieves a good tradeoff between model complexity and accuracy for investigating channel propagation characteristics, thereby supporting highly efficient communications in RIS-enabled UAV-to-vehicle wireless networks.
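The short numerical sketch below illustrates the sub-array partitioning idea on a one-dimensional array: the exact spherical-wave phase across the RIS is approximated by treating each sub-array as locally planar around its own centre, which retains near-field variation across sub-arrays while simplifying the per-element model. The geometry, carrier, and sub-array size are assumptions for illustration.

```python
# Numpy sketch: spherical-wave phases across a large linear RIS versus a
# per-sub-array planar (first-order) approximation about each sub-array centre.
import numpy as np

lam = 0.01                                   # wavelength (m), roughly 30 GHz
d = lam / 2
N = 256                                      # RIS elements on a line
pos = (np.arange(N) - (N - 1) / 2) * d       # element coordinates along the array
src = np.array([3.0, 1.0])                   # source at (x along array, y broadside), metres

# Exact spherical-wave phases.
r_exact = np.hypot(src[0] - pos, src[1])
phase_exact = 2 * np.pi * r_exact / lam

# Sub-array approximation: planar wavefront within each sub-array of size S.
S = 32
phase_sub = np.empty(N)
for k in range(N // S):
    idx = slice(k * S, (k + 1) * S)
    centre = pos[idx].mean()
    r_c = np.hypot(src[0] - centre, src[1])
    cos_aoa = (src[0] - centre) / r_c        # direction seen from the sub-array centre
    phase_sub[idx] = 2 * np.pi * (r_c - (pos[idx] - centre) * cos_aoa) / lam

print("max phase error (rad):", np.max(np.abs(phase_exact - phase_sub)))
```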
Abstract: In Internet of Things (IoT) networks, the amount of data sensed by user devices may be huge, resulting in serious network congestion. To solve this problem, intelligent data compression is critical. The variational information bottleneck (VIB) approach, combined with machine learning, can be employed to train the encoder and decoder so that the required transmission data size is reduced significantly. However, VIB suffers from a heavy computing burden and network insecurity. In this paper, we propose a blockchain-enabled VIB (BVIB) approach to relieve the computing burden while guaranteeing network security. Extensive simulations conducted with Python and C++ demonstrate that BVIB outperforms VIB by 36%, 22%, and 57% in terms of time and CPU cycle cost, mutual information, and accuracy under attack, respectively.
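For reference, the sketch below shows a standard variational information bottleneck training objective (a reparameterized Gaussian latent plus a KL penalty that upper-bounds I(Z;X)). The blockchain layer of BVIB is outside the scope of this sketch, and all dimensions are illustrative.

```python
# PyTorch sketch of a generic VIB encoder/decoder and its loss; x_dim, z_dim,
# the class count, and beta are assumed values, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    def __init__(self, x_dim=64, z_dim=16, n_classes=10):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)    # mean and log-variance of q(z|x)
        self.dec = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

model, beta = VIB(), 1e-3
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
logits, mu, logvar = model(x)
kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()  # KL(q(z|x) || N(0, I))
loss = F.cross_entropy(logits, y) + beta * kl     # task accuracy vs. compression tradeoff
loss.backward()
print(float(loss))
```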
Abstract: In this paper, we consider time-varying channel estimation in millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems with hybrid beamforming architectures. Different from existing contributions that considered single-carrier mmWave systems with high mobility, a wideband orthogonal frequency division multiplexing (OFDM) system is considered in this work. To solve the channel estimation problem under doubly selective channels, we propose a pilot transmission scheme based on 5G OFDM, and the received signals are formed as a fourth-order tensor that fits the low-rank CANDECOMP/PARAFAC (CP) model. By further exploiting the Vandermonde structure of a factor matrix, a tensor-subspace decomposition based channel estimation method is proposed to solve the CP decomposition, and the uniqueness condition is analyzed. Based on the decomposed factor matrices, the channel parameters, including angles of arrival/departure, delays, channel gains, and Doppler shifts, are estimated, and the Cramér-Rao bound (CRB) results are derived as performance metrics. Simulation results demonstrate the superior performance of the proposed method over other benchmarks. Furthermore, the channel estimation methods are tested on channel parameters generated by Wireless InSite, and simulation results show the effectiveness of the proposed method in practical scenarios.
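A minimal sketch of fitting a low-rank CP model to a fourth-order received-signal tensor with the tensorly library is given below. The subsequent steps of exploiting the Vandermonde structure of a factor matrix and extracting angles, delays, and Doppler shifts are not reproduced, and the tensor sizes and rank are illustrative assumptions.

```python
# Fit a rank-R CP model to a synthetic fourth-order tensor with tensorly; the
# recovered factor matrices would then be post-processed for channel parameters.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
R = 3                                                # assumed number of propagation paths
dims = (16, 8, 12, 10)                               # e.g. antennas x frames x subcarriers x symbols
factors_true = [rng.standard_normal((d, R)) for d in dims]
Y = tl.cp_to_tensor((np.ones(R), factors_true))      # noiseless fourth-order signal tensor
Y += 0.01 * rng.standard_normal(dims)                # add light noise

weights, factors = parafac(tl.tensor(Y), rank=R, n_iter_max=200)
print([f.shape for f in factors])                    # [(16, 3), (8, 3), (12, 3), (10, 3)]
```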