Abstract:In Internet of Things (IoT) networks, the amount of data sensed by user devices may be huge, resulting in serious network congestion. To solve this problem, intelligent data compression is critical. The variational information bottleneck (VIB) approach, combined with machine learning, can be employed to train the encoder and decoder so that the required transmission data size can be reduced significantly. However, VIB suffers from a heavy computing burden and network insecurity. In this paper, we propose a blockchain-enabled VIB (BVIB) approach to relieve the computing burden while guaranteeing network security. Extensive simulations conducted in Python and C++ demonstrate that BVIB outperforms VIB by 36%, 22% and 57% in terms of time and CPU-cycle cost, mutual information, and accuracy under attack, respectively.
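As a rough illustration of the VIB training objective that BVIB builds on, the following PyTorch sketch combines a task loss with a KL compression term; the layer sizes, the value of beta, and the random data are placeholder assumptions, not the paper's setup.

```python
# Minimal variational information bottleneck (VIB) sketch in PyTorch.
# Architecture sizes, beta, and the toy data are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(128, z_dim)  # log-variance of q(z|x)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    # Task term: cross-entropy between decoder output and labels.
    ce = F.cross_entropy(logits, labels)
    # Compression term: KL( q(z|x) || N(0, I) ), averaged over the batch.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return ce + beta * kl

# Toy usage with random data (placeholders).
enc, dec = VIBEncoder(), nn.Linear(8, 10)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
mu, logvar = enc(x)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
loss = vib_loss(dec(z), y, mu, logvar)
loss.backward()
```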
Abstract:In this paper, we consider time-varying channel estimation in millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems with hybrid beamforming architectures. Unlike existing contributions that consider single-carrier mmWave systems with high mobility, this work considers a wideband orthogonal frequency division multiplexing (OFDM) system. To solve the channel estimation problem under doubly selective channels, we propose a pilot transmission scheme based on 5G OFDM, and the received signals are formed as a fourth-order tensor, which fits the low-rank CANDECOMP/PARAFAC (CP) model. By further exploiting the Vandermonde structure of a factor matrix, a tensor-subspace decomposition based channel estimation method is proposed to solve the CP decomposition, and the corresponding uniqueness condition is analyzed. Based on the decomposed factor matrices, the channel parameters, including angles of arrival/departure, delays, channel gains and Doppler shifts, are estimated, and the Cramér-Rao bound (CRB) results are derived as performance metrics. Simulation results demonstrate the superior performance of the proposed method over other benchmarks. Furthermore, the channel estimation methods are tested on channel parameters generated by Wireless InSite, and the results show the effectiveness of the proposed method in practical scenarios.
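To make the low-rank CP model concrete, the numpy sketch below runs a plain alternating-least-squares CP decomposition on a synthetic fourth-order tensor. It is a generic textbook illustration rather than the tensor-subspace decomposition method proposed in the paper, and the tensor dimensions and rank are arbitrary assumptions.

```python
# Generic CP (CANDECOMP/PARAFAC) decomposition of a 4th-order tensor via
# alternating least squares; dimensions and rank are illustrative only.
import numpy as np

def khatri_rao(mats):
    # Column-wise Khatri-Rao product of matrices sharing the same column count.
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, R)
    return out

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers become the columns.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=200):
    rng = np.random.default_rng(0)
    factors = [rng.standard_normal((dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for n in range(T.ndim):
            others = [factors[m] for m in range(T.ndim) if m != n]
            kr = khatri_rao(others)                      # matches the unfolding order
            factors[n] = unfold(T, n) @ np.linalg.pinv(kr).T
    return factors

# Synthetic rank-3 tensor (e.g., antennas x antennas x subcarriers x time slots).
rng = np.random.default_rng(1)
true = [rng.standard_normal((d, 3)) for d in (8, 8, 16, 10)]
T = np.einsum('ir,jr,kr,lr->ijkl', *true)
est = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr,lr->ijkl', *est)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))     # relative fit error, near zero
```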
Abstract:Orthogonal time frequency space (OTFS) modulation, a delay-Doppler (DD) domain communication scheme exhibiting strong robustness against Doppler shifts, has the potential to be employed in low Earth orbit (LEO) satellite communications. However, the performance comparison with orthogonal frequency division multiplexing (OFDM) modulation and the resource allocation scheme for multiuser OTFS-based LEO satellite communication systems have rarely been investigated. In this paper, we conduct a performance comparison between the OTFS and OFDM modulations under various channel conditions, encompassing evaluations of sum-rate and bit error ratio (BER). Additionally, we investigate the joint optimal allocation of power and delay-Doppler resource blocks aiming at maximizing the sum-rate for multiuser downlink OTFS-based LEO satellite communication systems. Unlike conventional modulations relying on complex input-output relations within the time-frequency (TF) domain, the OTFS modulation exploits both time and frequency diversities, i.e., the delays and Doppler shifts remain constant during an OTFS frame, which facilitates a simple DD-domain input-output relation for our investigation. We transform the resulting non-convex and combinatorial optimization problem into an equivalent difference-of-convex problem by decoupling the conditional constraints, and solve the transformed problem via a penalty convex-concave procedure algorithm. Simulation results demonstrate that the OTFS modulation is robust to the carrier frequency offsets (CFOs) caused by the high mobility of LEO satellites and outperforms the OFDM modulation. Moreover, numerical results indicate that our proposed resource allocation scheme achieves a higher sum-rate than existing schemes for the OTFS modulation, such as delay division multiple access and Doppler division multiple access, especially in the high signal-to-noise ratio (SNR) regime.
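For readers unfamiliar with the DD-domain processing that the simple input-output relation rests on, the numpy sketch below shows textbook OTFS modulation with rectangular pulses (ISFFT followed by a per-symbol IFFT). The grid size and QPSK symbols are arbitrary assumptions, and this is not the paper's resource allocation algorithm.

```python
# Minimal OTFS transmitter sketch (rectangular pulses): delay-Doppler grid ->
# time-frequency grid via the ISFFT, then a per-symbol IFFT (Heisenberg transform).
import numpy as np

M, N = 64, 16            # delay bins (subcarriers) x Doppler bins (symbols), illustrative
rng = np.random.default_rng(0)
# QPSK symbols placed on the delay-Doppler grid.
X_dd = (rng.choice([-1, 1], (M, N)) + 1j * rng.choice([-1, 1], (M, N))) / np.sqrt(2)

# ISFFT: DFT along the delay axis, IDFT along the Doppler axis (unitary norm).
X_tf = np.fft.fft(np.fft.ifft(X_dd, axis=1, norm='ortho'), axis=0, norm='ortho')

# Heisenberg transform with rectangular pulses: IFFT over subcarriers per symbol,
# then serialize the M samples of each symbol into the time-domain frame.
s = np.fft.ifft(X_tf, axis=0, norm='ortho').T.reshape(-1)

# With rectangular pulses this collapses to an IDFT across the Doppler axis only.
s_direct = np.fft.ifft(X_dd, axis=1, norm='ortho').T.reshape(-1)
assert np.allclose(s, s_direct)
```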
Abstract:The integration of sensing capabilities into communication systems by sharing physical resources has significant potential for reducing spectrum, hardware, and energy costs while inspiring innovative applications. Cooperative networks, in particular, are expected to enhance sensing services by enlarging the coverage area and enriching sensing measurements, thus improving service availability and accuracy. This paper proposes a cooperative integrated sensing and communication (ISAC) framework by leveraging information-carrying orthogonal frequency division multiplexing (OFDM) signals transmitted by access points (APs). Specifically, we propose a two-stage scheme for target localization, where communication signals are reused as sensing reference signals based on the system information shared at the central processing unit (CPU). In Stage I, we measure the ranges of the scattered paths induced by targets by extracting time-delay information from the signals received at the APs. The target locations are then estimated in Stage II based on these range measurements. Considering that the scattered paths corresponding to some targets may not be detectable by all APs, we propose an effective algorithm to match the range measurements with the targets and thereby estimate the target locations. Notably, by analyzing the OFDM numerologies defined in the fifth generation (5G) standards, we elucidate the flexibility and consistency of the performance trade-offs in both the communication and sensing aspects. Finally, numerical results confirm the effectiveness of our sensing scheme and the cooperative gain of the ISAC framework.
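To illustrate the Stage-I idea of extracting time-delay (range) information from reused OFDM communication signals, here is a hedged numpy sketch: the known data symbols are divided out of the received subcarriers, an IFFT yields a delay profile, and the peak maps to a range. The numerology and the single-scatterer toy channel are placeholder assumptions.

```python
# Sketch of OFDM-based delay/range extraction: divide out the known symbols per
# subcarrier, IFFT to a delay profile, pick the peak. Numerology and the toy
# single-scatterer channel are illustrative assumptions.
import numpy as np

c = 3e8
N_sc = 1024                 # subcarriers (assumption)
delta_f = 30e3              # 5G-like subcarrier spacing (assumption)
rng = np.random.default_rng(0)

# Known OFDM symbols reused as the sensing reference (unit-modulus QPSK).
x = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N_sc))

# Toy scattered path with 600 m total (AP -> target -> AP) propagation distance.
tau_true = 600.0 / c
n = np.arange(N_sc)
h = 0.5 * np.exp(-1j * 2 * np.pi * n * delta_f * tau_true)
y = h * x + 0.01 * (rng.standard_normal(N_sc) + 1j * rng.standard_normal(N_sc))

# Per-subcarrier channel estimate, then IFFT -> delay (range) profile.
profile = np.fft.ifft(y / x)
delay_bin = np.argmax(np.abs(profile))
range_est = delay_bin / (N_sc * delta_f) * c     # bin width = 1/(N*delta_f) seconds
print(f"estimated scattered-path range ~ {range_est:.1f} m")
```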
Abstract:Ultra-densely deploying access points (APs) to support the increasing data traffic would significantly exacerbate the cell-edge problem inherent in traditional cellular networks. By removing the cell boundaries and coordinating all APs for joint transmission, the cell-edge problem can be alleviated, but this in turn leads to unaffordable system complexity and channel measurement overhead. A scalable clustered cell-free network architecture has been proposed recently, under which the large-scale network is flexibly partitioned into a set of independent subnetworks operating in parallel. In this paper, we study the energy-efficient clustered cell-free networking problem with AP selection. Specifically, we propose a user-centric ratio-fixed AP-selection based clustering (UCR-ApSel) algorithm to form subnetworks dynamically. Following this, we theoretically analyze the average energy efficiency achieved with the proposed UCR-ApSel scheme and derive an effective closed-form upper bound. Based on the analytical upper-bound expression, the optimal AP-selection ratio that maximizes the average energy efficiency is further derived as a simple explicit function of the total number of APs and the number of subnetworks. Simulation results demonstrate the effectiveness of the derived optimal AP-selection ratio and show that the proposed UCR-ApSel algorithm with the optimal AP-selection ratio achieves around 40% higher energy efficiency than the baselines. The analysis provides important insights into the design and optimization of future ultra-dense wireless communication systems.
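The exact UCR-ApSel procedure is given in the paper; the following numpy skeleton only sketches the general user-centric, ratio-fixed AP-selection idea, where each user keeps the strongest fraction of APs by large-scale channel gain. The gains, the ratio, and the serving-set bookkeeping are synthetic assumptions for illustration.

```python
# Schematic user-centric AP selection with a fixed ratio rho: each user keeps
# the ceil(rho*A) APs with the largest large-scale channel gains. This is an
# illustrative skeleton, not the exact UCR-ApSel algorithm of the paper.
import numpy as np

rng = np.random.default_rng(0)
A, K = 100, 20            # number of APs, number of users (assumptions)
rho = 0.1                 # AP-selection ratio (assumption)

# Synthetic large-scale gains (e.g., distance-based path loss), shape (K, A).
gains = rng.exponential(scale=1.0, size=(K, A))

n_sel = int(np.ceil(rho * A))
# Indices of the n_sel strongest APs for each user.
selected = np.argsort(gains, axis=1)[:, -n_sel:]

# Serving sets: AP -> users that selected it (a basis for forming subnetworks).
serving = {a: np.where((selected == a).any(axis=1))[0].tolist() for a in range(A)}
print(selected[0], "APs selected by user 0")
```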
Abstract:Extremely large-scale multiple-input multiple-output (XL-MIMO) systems are capable of improving spectral efficiency by employing far more antennas than conventional massive MIMO at the base station (BS). However, beam training in multiuser XL-MIMO systems is challenging. To tackle this issue, we conceive a three-phase graph neural network (GNN)-based beam training scheme for multiuser XL-MIMO systems. In the first phase, only far-field wide beams have to be tested for each user, and the GNN is utilized to map the beamforming gain information of the far-field wide beams to the optimal near-field beam for each user. In addition, the proposed GNN-based scheme can exploit the position correlation between adjacent users to further improve the accuracy of beam training. In the second phase, a beam allocation scheme based on the probability vectors produced at the outputs of the GNNs is proposed to resolve beam-direction conflicts between users. In the third phase, the hybrid transmit beamforming (TBF) is designed to further reduce the inter-user interference. Our simulation results show that the proposed scheme improves the beam training performance over the benchmarks. Moreover, the performance of the proposed beam training scheme approaches that of an exhaustive search, despite requiring only about 7% of the pilot overhead.
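As a toy illustration of the first-phase mapping, the PyTorch sketch below uses a simple message-passing layer over a user graph to map per-user far-field wide-beam gains to scores over a near-field codebook. The layer sizes, codebook size, fully connected user graph, and random inputs are assumptions, not the paper's trained three-phase design.

```python
# Toy GNN mapping far-field wide-beam gains to near-field beam scores per user;
# layer sizes, codebook size, and the user graph are illustrative assumptions.
import torch
import torch.nn as nn

class BeamGNNLayer(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, hid_dim)
        self.nbr_lin = nn.Linear(in_dim, hid_dim)

    def forward(self, h, adj):
        # adj: (K, K) 0/1 adjacency over users; mean-aggregate neighbor features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        nbr = (adj @ h) / deg
        return torch.relu(self.self_lin(h) + self.nbr_lin(nbr))

K, n_wide, n_near = 4, 16, 256         # users, far-field wide beams, near-field codebook
layer1 = BeamGNNLayer(n_wide, 64)
layer2 = BeamGNNLayer(64, 64)
head = nn.Linear(64, n_near)           # per-user probability vector over near-field beams

gains = torch.randn(K, n_wide)         # measured wide-beam gains (random placeholder)
adj = torch.ones(K, K) - torch.eye(K)  # assume adjacent users are all connected
h = layer2(layer1(gains, adj), adj)
beam_logits = head(h)                  # (K, n_near); argmax = predicted beam per user
print(beam_logits.softmax(dim=-1).argmax(dim=-1))
```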
Abstract:In this paper, we consider an active reconfigurable intelligent surface (RIS) deployed to assist multiuser downlink transmission in the presence of practical hardware impairments (HWIs), including the HWIs at the transceivers and the phase noise at the active RIS. The active RIS amplifies the incident signals to alleviate the multiplicative fading effect, which is a limitation of conventional passive RIS-aided wireless systems. We aim to maximize the sum rate by jointly designing the transmit beamforming at the base station (BS) and the amplification factors and phase shifts at the active RIS. To tackle this challenging optimization problem effectively, we decouple it into two tractable subproblems, each of which is then transformed into a second-order cone programming (SOCP) problem. A block coordinate descent (BCD) framework is applied to solve them, where the transmit beamforming and the reflection coefficients are alternately designed. In addition, another efficient algorithm is presented to reduce the computational complexity. Specifically, by exploiting the majorization-minimization (MM) approach, each subproblem is reformulated into a tractable surrogate problem, whose closed-form solutions are obtained by the Lagrange dual decomposition approach and an element-wise alternating sequential optimization method. Simulation results validate the effectiveness of our developed algorithms and reveal that the HWIs significantly limit the system performance of active RIS-empowered wireless communications. Furthermore, the active RIS noticeably boosts the sum rate under the same total power budget compared with the passive RIS.
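The following numpy sketch only illustrates the block coordinate descent pattern on a toy biconvex least-squares problem with closed-form block updates; it mirrors the alternation between the beamforming block and the reflection-coefficient block but is not the paper's SOCP or MM algorithm, and the toy objective is an assumption.

```python
# Block coordinate descent (BCD) skeleton on a toy biconvex problem
#   minimize_{x, y}  || A @ (x * y) - b ||^2 ,
# solved by alternating exact least-squares updates over the two blocks.
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.ones(n)
y = np.ones(n)

for it in range(50):
    # Block 1: fix y, solve the convex LS subproblem in x (A * y == A @ diag(y)).
    x, *_ = np.linalg.lstsq(A * y, b, rcond=None)
    # Block 2: fix x, solve the convex LS subproblem in y.
    y, *_ = np.linalg.lstsq(A * x, b, rcond=None)
    obj = np.linalg.norm(A @ (x * y) - b) ** 2   # non-increasing across iterations
print(f"objective after BCD: {obj:.4f}")
```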
Abstract:A cell-free network aided by active reconfigurable intelligent surfaces (RISs) is investigated in this paper. Based on imperfect channel state information (CSI), the aggregated channel from the user to the access point (AP) is first estimated using the linear minimum mean square error (LMMSE) technique. The central processing unit (CPU) then detects the uplink data of individual users using the maximum ratio combining (MRC) approach, relying on the estimated channel. A closed-form expression for the uplink spectral efficiency (SE) is then derived, which depends only on statistical CSI (S-CSI). The amplitude gain of each active RIS element is derived in closed form as a function of the number of active RIS elements, the number of users, and the size of each reflecting element. A soft actor-critic (SAC) algorithm is utilized to design the phase shifts of the active RIS to maximize the uplink SE. Simulation results demonstrate the robustness of the proposed SAC algorithm, showcasing its effectiveness in cell-free networks under imperfect CSI.
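A minimal numpy sketch of the LMMSE step is shown below for a generic linear pilot model y = P h + n with a known channel covariance; the dimensions, the exponential correlation model, and the pilot matrix are illustrative assumptions rather than the paper's aggregated cell-free/RIS channel model. MRC detection would then simply weight the received data by the conjugate of the estimate.

```python
# Minimal LMMSE channel-estimation sketch for a generic linear pilot model
#   y = P @ h + n,  h ~ CN(0, R_h),  n ~ CN(0, sigma2 * I).
# Dimensions, covariance, and pilot matrix are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_h = 16, 8
sigma2 = 0.1

# Spatially correlated channel (exponential correlation model, assumption).
R_h = np.array([[0.7 ** abs(i - j) for j in range(n_h)] for i in range(n_h)], dtype=complex)
L = np.linalg.cholesky(R_h)
h = L @ (rng.standard_normal(n_h) + 1j * rng.standard_normal(n_h)) / np.sqrt(2)

P = (rng.standard_normal((n_rx, n_h)) + 1j * rng.standard_normal((n_rx, n_h))) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = P @ h + noise

# LMMSE estimator: h_hat = R_h P^H (P R_h P^H + sigma2 I)^{-1} y.
C_y = P @ R_h @ P.conj().T + sigma2 * np.eye(n_rx)
h_hat = R_h @ P.conj().T @ np.linalg.solve(C_y, y)
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))   # normalized estimation error
```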
Abstract:In this paper, we investigate a cascaded channel estimation method for a millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) system aided by a reconfigurable intelligent surface (RIS), where the base station (BS) is equipped with low-resolution analog-to-digital converters (ADCs) and both the BS and the RIS are equipped with uniform planar arrays (UPAs). Due to the sparsity of the mmWave channel, the channel estimation can be formulated as a compressed sensing (CS) problem. However, low-resolution quantization causes severe information loss, and traditional CS algorithms do not work well. To recover the signal and the sparse angular-domain channel from the quantized outputs, we introduce Bayesian inference and an efficient vector approximate message passing (VAMP) algorithm to solve the quantized-output CS problem. To further improve the efficiency of the VAMP algorithm, a fast Fourier transform (FFT) based fast computation method is derived. Simulation results demonstrate the effectiveness and accuracy of the proposed cascaded channel estimation method for the RIS-aided mmWave massive MIMO system with few-bit ADCs. Furthermore, the proposed channel estimation method achieves an acceptable performance gap between low-resolution ADCs and infinite-resolution ADCs at low signal-to-noise ratio (SNR), which implies the applicability of few-bit ADCs in practice.
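The sketch below shows a uniform few-bit ADC model and the per-bin posterior mean (a truncated-Gaussian mean), which is the kind of output "denoiser" that quantized-output message-passing schemes rely on. The bit-width, clipping range, and Gaussian pseudo-prior are assumptions, and this is only one ingredient, not the full VAMP recursion of the paper.

```python
# Few-bit uniform ADC model and the per-bin posterior mean (truncated-Gaussian
# mean). Bit-width, clipping range, and the Gaussian pseudo-prior are assumptions.
import numpy as np
from scipy.stats import norm

def quantize(z, bits=2, v_max=1.0):
    # Uniform mid-rise quantizer: returns the bin index of each real sample.
    levels = 2 ** bits
    step = 2 * v_max / levels
    idx = np.clip(np.floor((z + v_max) / step), 0, levels - 1).astype(int)
    return idx, step

def bin_edges(idx, step, v_max=1.0, bits=2):
    lo = -v_max + idx * step
    hi = lo + step
    # Outermost bins extend to +/- infinity (clipping region of the ADC).
    lo = np.where(idx == 0, -np.inf, lo)
    hi = np.where(idx == 2 ** bits - 1, np.inf, hi)
    return lo, hi

def posterior_mean(lo, hi, p, tau):
    # E[z | lo < z <= hi] for z ~ N(p, tau): truncated-Gaussian mean.
    s = np.sqrt(tau)
    a, b = (lo - p) / s, (hi - p) / s
    Z = np.maximum(norm.cdf(b) - norm.cdf(a), 1e-12)
    return p + s * (norm.pdf(a) - norm.pdf(b)) / Z

rng = np.random.default_rng(0)
z = 0.5 * rng.standard_normal(10)          # unquantized receive samples (toy)
idx, step = quantize(z)
lo, hi = bin_edges(idx, step)
z_hat = posterior_mean(lo, hi, p=np.zeros_like(z), tau=0.25)  # prior matched to z
print(np.c_[z, z_hat])
```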
Abstract:Dual-functional radar-communication (DFRC) has attracted considerable attention. This paper considers a frequency-selective multipath fading environment and proposes DFRC waveform design strategies based on multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. In the proposed waveform design strategies, the Cramér-Rao bound (CRB) of the radar system and the inter-stream interference (ISI) and achievable rate of the communication system are respectively considered as the performance metrics. We focus on the performance trade-off between the radar system and the communication system, and formulate the corresponding optimization problems. In the ISI-minimization based waveform design strategy, the optimization problem is convex and can be easily solved. In the achievable-rate-maximization based waveform design strategy, we propose a water-filling (WF) and sequential quadratic programming (SQP) based algorithm to derive the covariance matrix and the precoding matrix. Simulation results validate the proposed DFRC waveform designs and show that the achievable-rate-maximization based strategy achieves a better performance than the ISI-minimization based strategy.
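For reference, the classic water-filling power allocation over parallel eigen-channels, which rate-maximizing designs of this kind build on, can be sketched as follows; the channel gains and power budget are placeholder assumptions, and the paper's combined WF and SQP procedure is not reproduced here.

```python
# Classic water-filling power allocation over parallel channels:
# maximize sum_i log2(1 + g_i * p_i)  s.t.  sum_i p_i <= P_total, p_i >= 0.
# Channel gains and the power budget below are illustrative assumptions.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    g = np.asarray(gains, dtype=float)
    active = np.ones_like(g, dtype=bool)
    while True:
        # Water level mu from the KKT conditions over the currently active channels.
        mu = (p_total + np.sum(1.0 / g[active])) / active.sum()
        p = np.where(active, mu - 1.0 / g, 0.0)
        if np.all(p >= -tol):
            return np.maximum(p, 0.0)
        active &= p > 0            # drop channels that would receive negative power

gains = np.array([2.0, 1.0, 0.5, 0.1])
p = water_filling(gains, p_total=4.0)
rate = np.sum(np.log2(1.0 + gains * p))
print(p, p.sum(), rate)
```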