The explosive growth of dynamic and heterogeneous data traffic poses great challenges for 5G and beyond mobile networks. To enhance network capacity and reliability, we propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme that adaptively allocates the uplink and downlink time-frequency resources of base stations (BSs) to meet asymmetric and heterogeneous traffic demands while alleviating inter-cell interference. We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP) that maximizes the long-term expected sum rate under the users' packet dropping ratio constraints. To jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named the federated Wolpertinger deep deterministic policy gradient (FWDDPG) algorithm. The BSs decide their local time-frequency configurations through RL and achieve global training by exchanging local RL models with their neighbors under a decentralized federated learning framework. Specifically, to deal with the large-scale discrete action space of each BS, we adopt a DDPG-based algorithm to generate actions in a continuous space and then utilize the Wolpertinger policy to reduce the errors incurred in mapping from the continuous action space back to the discrete action space. Simulation results demonstrate the superiority of the proposed algorithm over benchmark algorithms with respect to system sum rate.
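As a rough illustration of the Wolpertinger mapping mentioned above, the sketch below shows the generic two-step idea: take the continuous proto-action from a DDPG actor, retrieve its k nearest discrete neighbors, and let the critic pick the best candidate. All names (`wolpertinger_select`, `q_function`) and the toy 1-D action grid are our own; the paper's actual state-dependent critic and action encoding may differ.

```python
import numpy as np

def wolpertinger_select(proto_action, discrete_actions, q_function, k=3):
    """Map a continuous proto-action to a discrete action (Wolpertinger policy).

    proto_action: continuous action from the DDPG actor, shape (d,).
    discrete_actions: array of valid discrete actions, shape (n, d).
    q_function: critic returning a scalar value for a candidate action.
    k: number of nearest discrete neighbors to evaluate with the critic.
    """
    # Step 1: find the k discrete actions closest to the actor's output.
    dists = np.linalg.norm(discrete_actions - proto_action, axis=1)
    candidates = discrete_actions[np.argsort(dists)[:k]]
    # Step 2: refine the mapping with the critic -- pick the highest-valued one.
    q_values = np.array([q_function(a) for a in candidates])
    return candidates[np.argmax(q_values)]

# Toy example: 1-D action grid; a critic that simply prefers larger actions.
grid = np.arange(0.0, 1.01, 0.1).reshape(-1, 1)
chosen = wolpertinger_select(np.array([0.42]), grid, q_function=lambda a: a[0], k=3)
```

With k=3 the neighbors of 0.42 are {0.3, 0.4, 0.5}, and the critic selects 0.5; a plain nearest-neighbor projection would have returned 0.4, which is the mapping error the critic refinement is meant to reduce.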
Owing to its ability to reshape the wireless communication environment in a cost- and energy-efficient manner, the reconfigurable intelligent surface (RIS) has garnered substantial attention. However, an explicit power consumption model of the RIS, together with measurement validation, has received far too little attention. In this work, we therefore propose an RIS power consumption model and carry out practical measurement validation with various RISs. The measurement results illustrate the generality and accuracy of the proposed model. First, we verify that the RIS has static power consumption and present the corresponding experimental results. Second, we confirm that the dynamic power consumption of the varactor-diode-based RIS is almost negligible. Finally, and significantly, we model the quantitative relationship between the dynamic power consumption of the PIN-diode-based RIS and its polarization mode, controllable bit resolution, and working status, which is validated by practical experimental results.
The deep learning-based autoencoder has shown considerable potential in channel state information (CSI) feedback. However, the excellent feedback performance achieved by the autoencoder comes at the expense of high computational complexity. In this paper, a knowledge distillation-based neural network lightweighting strategy is introduced into deep learning-based CSI feedback to reduce the computational requirement. The key idea is to transfer the dark knowledge learned by a complicated teacher network to a lightweight student network, thereby improving the performance of the student network. First, an autoencoder distillation method is proposed by forcing the student autoencoder to mimic the output of the teacher autoencoder. Then, given the more limited computational power at the user equipment, an encoder distillation method is proposed in which distillation is performed only on the student encoder at the user equipment, while the teacher decoder is used directly at the base station. Numerical simulation results show that the performance of the student autoencoder can be considerably improved by knowledge distillation, and that encoder distillation can further improve the feedback performance and reduce the complexity.
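The "mimic the teacher" idea above typically reduces to a training loss that blends an ordinary reconstruction term with an imitation term. The sketch below shows that generic combined loss; the weighting `alpha` and the function name are illustrative, not the paper's exact formulation.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Blend the task loss (match the true CSI) with the imitation loss
    (match the teacher autoencoder's reconstruction)."""
    task = np.mean((student_out - target) ** 2)          # ordinary MSE vs. ground truth
    imitate = np.mean((student_out - teacher_out) ** 2)  # mimic the teacher's output
    return alpha * task + (1.0 - alpha) * imitate

# Toy check: when the student reproduces the teacher exactly,
# only the (down-weighted) task term remains.
target = np.array([1.0, 0.0, -1.0])
teacher = np.array([0.9, 0.1, -0.8])
loss = distillation_loss(teacher, teacher, target, alpha=0.5)
```

In practice the teacher's softened outputs act as a regularizer: even where the teacher is imperfect, its reconstructions carry structure ("dark knowledge") that a small student cannot learn from the ground-truth targets alone.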
This paper considers an active reconfigurable intelligent surface (RIS)-aided communication system, where an M-antenna base station (BS) transmits data symbols to a single-antenna user via an N-element active RIS. We use two-timescale channel state information (CSI) in our system, so that the channel estimation overhead and feedback overhead can be decreased dramatically. A closed-form approximate expression of the achievable rate (AR) is derived, and the phase shifts at the active RIS are optimized. In addition, we compare the performance of the active RIS system with that of the passive RIS system. The results show that the active RIS system achieves a larger AR than the passive RIS system.
Semantic communication has become a popular research area due to its high spectrum efficiency and error-correction performance. Some studies use deep learning to extract semantic features, which usually form end-to-end semantic communication systems and struggle to adapt to varying wireless environments. Therefore, novel semantic-based coding methods and performance metrics have been investigated, and the resulting semantic systems consist of modules similar to those in conventional communications but with improved functions. This article discusses recent achievements in state-of-the-art semantic communications that exploit the conventional modules in wireless systems. We demonstrate through two examples that the traditional hybrid automatic repeat request and modulation methods can be redesigned around novel semantic coding and metrics to further improve the performance of wireless semantic communications. At the end of this article, some open issues are identified.
In this paper, we propose a symbol-level precoding (SLP) design that aims to minimize the weighted mean square error between the received signal and the constellation point located in the constructive interference region (CIR). Unlike most existing SLP schemes that rely on channel state information (CSI) only, the proposed scheme exploits both the CSI and the distribution information of the noise to achieve improved performance. We first propose a simple, generic description of the CIR that facilitates the subsequent SLP design. The resulting objective can be formulated as a nonnegative least squares (NNLS) problem, which can be solved efficiently by the active-set algorithm. Furthermore, the weighted minimum mean square error (WMMSE) precoding and existing SLP schemes can be easily verified to be special cases of the proposed scheme. Finally, simulation results show that the proposed precoding outperforms state-of-the-art SLP schemes over the full signal-to-noise ratio range in both uncoded and coded systems, without additional complexity over conventional SLP.
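The NNLS problem named above has the generic form min_x ||Ax - b||^2 subject to x >= 0. The abstract's solver of choice is the active-set algorithm; as a self-contained stand-in, the sketch below solves the same problem by projected gradient descent (the matrix, vector, and function names are our own and are not taken from the paper).

```python
import numpy as np

def nnls_pg(A, b, iters=5000, step=None):
    """Solve min_x ||Ax - b||^2 subject to x >= 0 by projected gradient
    descent -- a simple stand-in for the active-set NNLS solver."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                # gradient of the least-squares objective
        x = np.maximum(0.0, x - step * grad)    # gradient step, then project onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])    # nonnegative ground truth
b = A @ x_true
x_hat = nnls_pg(A, b)
```

Projected gradient is the simplest correct NNLS method; active-set solvers reach the same minimizer in far fewer iterations by tracking which nonnegativity constraints are tight, which is why they suit the per-symbol solves SLP requires.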
This letter presents a sensing-communication-computing-control (SC3) integrated satellite unmanned aerial vehicle (UAV) network, where the UAV is equipped with on-board sensors, mobile edge computing (MEC) servers, base stations, and a satellite communication module. Like a nervous system, this integrated network is capable of organizing multiple field robots in remote areas to perform mission-critical tasks that are dangerous for humans. Aiming to activate this nervous system with multiple SC3 loops, we present a control-oriented optimization problem. Different from traditional studies that mainly focused on communication metrics, we address the power allocation issue to minimize the sum linear quadratic regulator (LQR) control cost of all SC3 loops. Specifically, we show the convexity of the formulated problem and reveal the relationship between the optimal transmit power and the intrinsic entropy rate of the different SC3 loops. For the assured-to-be-stable case, we derive a closed-form solution for ease of practical application. After demonstrating the superiority of the control-oriented power allocation, we further highlight its difference from the classic capacity-oriented water-filling method.
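For readers unfamiliar with the capacity-oriented baseline that the letter contrasts against, the classic water-filling allocation assigns power p_i = max(0, mu - 1/g_i) to a channel with gain-to-noise ratio g_i, with the water level mu chosen so the powers meet the total budget. A minimal sketch (bisection on mu; variable names are our own):

```python
import numpy as np

def water_filling(gains, p_total):
    """Classic capacity-oriented water-filling: p_i = max(0, mu - 1/g_i),
    with the water level mu chosen so that sum(p_i) = p_total."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + np.max(1.0 / g)     # bracket for the water level
    for _ in range(100):                        # bisection on mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# Three channels: the weakest (g = 0.25) falls below the water level
# and receives no power at this budget.
p = water_filling([2.0, 1.0, 0.25], p_total=1.0)
```

Water-filling favors the strongest channels to maximize sum capacity; the letter's control-oriented allocation instead weights loops by their control requirements (entropy rates and LQR costs), so the two rules generally disagree on which loop deserves power.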
The reconfigurable intelligent surface (RIS) makes it possible to create an intelligent electromagnetic environment. Its low hardware cost makes the ultra-large (XL) RIS an attractive performance enhancement scheme, but it brings the challenge of near-field propagation channels, which complicates localization and channel estimation. In this paper, we consider spherical-wavefront propagation in the near field of a millimeter-wave/sub-Terahertz (mmWave/sub-THz) localization system assisted by an RIS. A near-field joint channel estimation and localization (NF-JCEL) algorithm is proposed based on the derived second-order Fresnel approximation of the near-field channel model. Specifically, we first decouple the user equipment (UE) distances and angles of arrival (AoAs) through a down-sampled Toeplitz covariance matrix, so that the elevation and azimuth AoAs in the array steering vectors can be estimated separately with low complexity. Then, the UE distance is estimated by a simple one-dimensional search, and the channel attenuation coefficients are obtained through the orthogonal matching pursuit (OMP) method. Simulation results validate the superiority of the proposed NF-JCEL algorithm over the conventional far-field algorithm and show that higher resolution accuracy can be obtained by the proposed algorithm.
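The OMP step named above is a generic greedy sparse-recovery routine: repeatedly pick the dictionary column most correlated with the current residual, re-fit the selected columns by least squares, and update the residual. A minimal sketch on a synthetic sparse problem (the dictionary here is random Gaussian, not the paper's steering-vector dictionary):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily select the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best-matching atom
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs                    # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60))
A /= np.linalg.norm(A, axis=0)           # unit-norm dictionary columns
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]   # 3-sparse coefficient vector
x_hat = omp(A, A @ x_true, sparsity=3)
```

In the NF-JCEL setting the columns of the dictionary correspond to candidate steering vectors, so the recovered support identifies the dominant paths and the coefficients give the channel attenuations.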
The 3rd Generation Partnership Project started the study of Release 18 in 2021. An artificial intelligence (AI)-native air interface is one of the key features of Release 18, and AI for channel state information (CSI) feedback enhancement has been selected as the representative use case. This article provides a comprehensive overview of AI for CSI feedback enhancement in 5G-Advanced and 6G. The scope of AI for CSI feedback enhancement in 5G-Advanced, including overhead reduction, accuracy improvement, and channel prediction, is first presented and discussed. Then, three representative frameworks of AI-enabled CSI feedback, including one-sided implicit feedback, two-sided autoencoder-based implicit feedback, and two-sided explicit feedback, are introduced and compared. Finally, the considerations in the standardization of AI for CSI feedback enhancement, especially evaluation, complexity, collaboration, generalization, information sharing, joint design with channel prediction, and reciprocity, are identified and discussed. This article provides a guideline for the standardization study of AI-based CSI feedback enhancement.
Many of the performance gains achieved by massive multiple-input multiple-output depend on the accuracy of the downlink channel state information (CSI) at the transmitter (base station), which is usually estimated at the receiver (user terminal) and fed back to the transmitter. The overhead of CSI feedback occupies substantial uplink bandwidth resources, especially when the number of transmit antennas is large. Deep learning (DL)-based CSI feedback refers to CSI compression and reconstruction by a DL-based autoencoder and can greatly reduce the feedback overhead. In this paper, a comprehensive overview of state-of-the-art research on this topic is provided, beginning with basic DL concepts widely used in CSI feedback and then categorizing and describing existing DL-based feedback works. The focus is on novel neural network architectures and the utilization of communication expert knowledge to improve CSI feedback accuracy. Works on bit-level CSI feedback and on the joint design of CSI feedback with other communication modules are also introduced, and practical issues, including training dataset collection, online training, complexity, generalization, and standardization effects, are discussed. At the end of the paper, challenges and potential research directions associated with DL-based CSI feedback in future wireless communication systems are identified.
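The compress-at-the-user, reconstruct-at-the-base-station pipeline described above can be illustrated without any neural network: a truncated SVD acts as a crude linear "encoder/decoder" stand-in for the learned autoencoder, and reconstruction quality is scored by the normalized mean square error (NMSE), the standard metric in this literature. The synthetic low-rank-plus-noise "channel" below is an assumption for illustration only.

```python
import numpy as np

def compress_reconstruct(H, rank):
    """Toy linear 'autoencoder': keep the top-`rank` singular components of
    the CSI matrix H (encoder), then rebuild it (decoder); report NMSE."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]            # low-rank reconstruction
    nmse = np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2
    return H_hat, nmse

rng = np.random.default_rng(2)
# Synthetic rank-4 channel plus small noise: structure the codec can exploit.
H = rng.standard_normal((32, 4)) @ rng.standard_normal((4, 32))
H += 0.01 * rng.standard_normal((32, 32))
_, nmse_2 = compress_reconstruct(H, rank=2)   # heavier compression, worse NMSE
_, nmse_4 = compress_reconstruct(H, rank=4)   # matches the true rank, near-exact
```

A learned autoencoder plays the same role but exploits nonlinear channel structure (sparsity in the angular-delay domain, spatial correlation), which is where the DL-based methods surveyed here gain over such linear baselines.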