Traditional physical-layer secure beamforming relies on precoding with channel state information (CSI) before signal transmission. However, imperfect CSI degrades the beamforming quality and can cause information leakage. In addition, multiple RF chains and antennas are needed to generate narrow beams, which complicates hardware implementation and is unsuitable for resource-constrained Internet-of-Things (IoT) devices. Moreover, with advances in hardware and artificial intelligence (AI), low-cost and intelligent eavesdropping on wireless communications is becoming increasingly harmful. In this paper, we propose a multi-carrier, multi-band waveform-defined security (WDS) framework, independent of CSI and extra RF chains, to defend against AI-based eavesdropping. Ideally, the continuous variation of sub-band structures yields an infinite number of spectral features, which can prevent brute-force eavesdropping. Sub-band spectral pattern information is efficiently constructed at legitimate users via a proposed chaotic sequence generator. A novel security metric, termed signal classification accuracy (SCA), is used to evaluate security robustness under AI eavesdropping. Communication error probability and complexity are also investigated to show the reliability and practicality of the proposed framework. Finally, compared to traditional secure beamforming techniques, the proposed multi-band WDS framework reduces power consumption by up to a factor of six.
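As a hedged illustration of the chaotic sequence idea, the sketch below derives a sub-band pattern from a shared secret key using a logistic map; the map choice, quantization and all parameter names are assumptions for illustration, not the paper's actual generator.

```python
def chaotic_subband_pattern(key, n_subbands, n_patterns):
    """Derive a sub-band spectral pattern from a shared secret key.

    Illustrative logistic-map generator; the paper's actual chaotic
    sequence design may differ.
    """
    r = 3.99                    # parameter placing the logistic map in its chaotic regime
    x = key                     # shared secret, a float in (0, 1)
    pattern = []
    for _ in range(n_subbands):
        x = r * x * (1.0 - x)                # iterate the chaotic map
        pattern.append(int(x * n_patterns))  # quantize the state to a pattern index
    return pattern

# Legitimate users sharing the same key reproduce the same pattern.
print(chaotic_subband_pattern(key=0.3141592, n_subbands=8, n_patterns=4))
```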
As an attractive enabling technology for next-generation wireless communications, network slicing supports diverse customized services in the global space-air-ground integrated network (SAGIN) under diverse resource constraints. In this paper, we dynamically consider three typical classes of radio access network (RAN) slices, namely high-throughput, low-delay and wide-coverage slices, over the same underlying physical SAGIN. The throughput, service delay and coverage area of these three slice classes are jointly optimized in a non-scalarized, multi-objective form by exploiting the distinct channel features and service advantages of the terrestrial, aerial and satellite components of SAGINs. A joint central and distributed multi-agent deep deterministic policy gradient (CDMADDPG) algorithm is proposed for solving this problem and obtaining Pareto-optimal solutions. The algorithm first determines the optimal virtual unmanned aerial vehicle (vUAV) positions and the inter-slice sub-channel and power sharing at a centralized unit. It then optimizes the intra-slice sub-channel and power allocation, and the virtual base station (vBS)/vUAV/virtual low earth orbit (vLEO) satellite deployment supporting the three slice classes, at three separate distributed units. Simulation results verify that the proposed method approaches the Pareto-optimal exploitation of multiple RAN slices and outperforms the benchmarks.
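The two-tier decision flow of CDMADDPG can be sketched as follows; the actor architecture, state/action dimensions and per-slice wiring are placeholders, and the critics, replay buffers and training loop of a full MADDPG implementation are omitted.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy mapping an observation to a bounded action."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh())  # actions scaled to [-1, 1]

    def forward(self, state):
        return self.net(state)

# Tier 1: a centralized unit decides vUAV positions and inter-slice sharing.
central_actor = Actor(state_dim=32, action_dim=8)

# Tier 2: one distributed unit per slice class refines intra-slice resources.
slice_actors = [Actor(state_dim=16, action_dim=4) for _ in range(3)]

global_state = torch.randn(1, 32)                 # placeholder network observation
inter_slice_action = central_actor(global_state)  # inter-slice sub-channel/power sharing

for actor in slice_actors:
    local_state = torch.randn(1, 16)              # placeholder per-slice observation
    intra_slice_action = actor(local_state)       # intra-slice sub-channel/power allocation
```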
With growing interest in outer space, space robots have become a focus of exploration. To coordinate them for unmanned space exploration, we propose a "mother-daughter" structure: the mother spacecraft orbits the planet, while daughter probes are distributed across the surface. The mother spacecraft senses the environment, computes control commands and distributes them to the daughter probes, which take actions accordingly. Together they form indivisible sensing-communication-computing-control ($\mathbf{SC^3}$) loops. We therefore optimize the spacecraft-probe downlink within the $\mathbf{SC^3}$ loops to minimize the sum linear quadratic regulator (LQR) cost, with the block length and transmit power as optimization variables. Owing to the cycle time constraint, the spacecraft-probe downlink operates in the finite block length (FBL) regime. To solve the resulting nonlinear mixed-integer problem, we first identify the optimal block length and then transform the power allocation problem into a tractable convex one. We further derive approximate closed-form solutions for the proposed scheme, as well as for the max-sum-rate and max-min-rate schemes, revealing their different power allocation principles. Moreover, we find that for time-insensitive control tasks, the proposed scheme is equivalent to the max-min-rate scheme. These findings are verified through simulations.
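The FBL regime referenced above is commonly handled with the normal approximation $R \approx C - \sqrt{V/n}\,Q^{-1}(\epsilon)$ due to Polyanskiy et al.; the snippet below evaluates this rate as a building block (the coupling with the LQR cost and the integer block-length search are not shown).

```python
import math
from scipy.stats import norm  # provides the inverse Q-function via isf

def fbl_rate(snr, blocklength, error_prob):
    """Normal approximation to the achievable rate (bits per channel use)
    of an AWGN channel in the finite block length regime."""
    C = math.log2(1.0 + snr)                                      # Shannon capacity
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2  # channel dispersion
    return C - math.sqrt(V / blocklength) * norm.isf(error_prob)  # C - sqrt(V/n) Q^-1(eps)

# Example: linear SNR of 10, n = 200 symbols, target error 1e-5.
print(fbl_rate(snr=10.0, blocklength=200, error_prob=1e-5))
```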
Reconfigurable intelligent surface (RIS) devices have emerged as an effective means of controlling propagation channels to enhance the end users' performance. However, RIS optimization involves configuring the radio frequency (RF) response of a large number of radiating elements, which is challenging in real-world applications due to the high computational complexity. In this paper, a model-free cross-entropy (CE) algorithm is proposed to optimize the binary RIS configuration for improving the signal-to-noise ratio (SNR) at the receiver. A key advantage of the proposed method is that it only requires system performance parameters, e.g., the received SNR, without any channel models or channel estimation. Both simulations and experiments are conducted to evaluate the performance of the proposed CE algorithm. The results demonstrate that the CE algorithm outperforms the benchmark algorithms and exhibits stronger channel hardening as the number of RIS elements increases.
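A minimal sketch of the model-free CE iteration over binary configurations is given below; the sample size, elite count and smoothing factor are illustrative choices, and `measure_snr` stands in for a real over-the-air SNR measurement.

```python
import numpy as np

def cross_entropy_ris(measure_snr, n_elements, n_samples=100,
                      n_elites=10, n_iters=50, smooth=0.7):
    """Model-free CE search over binary RIS configurations.

    measure_snr(config) returns the received SNR for a 0/1 configuration
    (a hardware measurement in practice; any callable works here).
    """
    p = np.full(n_elements, 0.5)  # Bernoulli sampling probability per element
    for _ in range(n_iters):
        samples = (np.random.rand(n_samples, n_elements) < p).astype(int)
        scores = np.array([measure_snr(s) for s in samples])
        elites = samples[np.argsort(scores)[-n_elites:]]        # best configurations
        p = (1 - smooth) * p + smooth * elites.mean(axis=0)     # smoothed update
    return (p > 0.5).astype(int)

# Toy surrogate for a real SNR measurement (unknown to the optimizer):
target = np.random.randint(0, 2, 64)
best = cross_entropy_ris(lambda c: -np.sum(c != target), n_elements=64)
```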
This paper investigates deep learning techniques for predicting the transmit beamforming in the multiuser multiple-input single-output downlink from historical channel data alone, without current channel information. This significantly reduces the channel estimation overhead and improves the spectrum efficiency, especially in high-mobility vehicular communications. Specifically, we propose a joint learning framework that incorporates channel prediction and power optimization and directly outputs the predicted transmit beamforming. In addition, we employ the attention mechanism in long short-term memory (LSTM) recurrent neural networks to improve the accuracy of channel prediction. Simulation results using both a simple autoregressive process model and the more realistic 3GPP spatial channel model verify that the proposed predictive beamforming scheme significantly improves the effective spectrum efficiency compared with traditional channel estimation and with a scheme that first predicts the channel and then optimizes the beamforming separately.
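A minimal PyTorch sketch of an LSTM channel predictor with additive attention over past hidden states is shown below; the layer sizes are placeholders, and the joint power-optimization head of the proposed framework is omitted.

```python
import torch
import torch.nn as nn

class AttentiveChannelPredictor(nn.Module):
    """LSTM channel predictor with attention over past hidden states."""
    def __init__(self, ch_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(ch_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)     # attention score per time step
        self.out = nn.Linear(hidden, ch_dim)  # next-slot channel estimate

    def forward(self, history):                  # history: (batch, T, ch_dim)
        h, _ = self.lstm(history)                # hidden states: (batch, T, hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over T steps
        context = (w * h).sum(dim=1)             # attention-weighted summary
        return self.out(context)

# Predict the next channel from a window of 10 past (real-valued) observations.
pred = AttentiveChannelPredictor(ch_dim=8)(torch.randn(2, 10, 8))
```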
Simultaneous wireless information and power transfer (SWIPT) has long been proposed as a key solution for charging and communicating with low-cost, low-power devices. However, the use of radio frequency (RF) signals for information/power transfer must comply with international health and safety regulations. In this paper, we provide a complete framework for the design and analysis of far-field SWIPT under safety constraints. In particular, we deal with two RF exposure regulations, namely the specific absorption rate (SAR) and the maximum permissible exposure (MPE). The state of the art regarding SAR and MPE is outlined, together with a description of how they can be modeled in the context of communication networks. We propose a deep learning approach for the design of robust beamforming subject to specific information, energy harvesting and SAR constraints. Furthermore, we present a thorough analytical study of the performance of large-scale SWIPT systems in terms of information and energy coverage under MPE constraints. This work provides insights into the optimal SWIPT design, as well as the potential of SWIPT systems properly developed under health and safety restrictions.
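SAR limits are commonly modeled as quadratic constraints of the form $\mathbf{w}^H \mathbf{A} \mathbf{w} \le P_{\mathrm{SAR}}$ on the beamformer $\mathbf{w}$; the sketch below enforces such a constraint by simple rescaling, whereas the paper's deep learning approach learns beamformers that satisfy it directly (the SAR matrix and the limit are placeholders).

```python
import numpy as np

def enforce_sar(w, A, sar_limit):
    """Scale a beamformer w so that the quadratic SAR constraint
    w^H A w <= sar_limit holds (A: positive semidefinite SAR matrix).

    A simple feasibility projection for illustration only.
    """
    exposure = np.real(np.conj(w) @ A @ w)
    if exposure > sar_limit:
        w = w * np.sqrt(sar_limit / exposure)  # shrink transmit power to the limit
    return w

A = np.eye(4) * 0.1  # placeholder SAR matrix (measured/modeled in practice)
w = (np.random.randn(4) + 1j * np.random.randn(4)) / np.sqrt(2)
w = enforce_sar(w, A, sar_limit=0.5)
```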
This paper studies fast adaptive beamforming for the multiuser multiple-input single-output downlink. Existing deep learning-based approaches assume that the training and testing channels follow the same distribution, which causes a task mismatch when the testing environment changes. Although meta learning can deal with the task mismatch, it relies on labelled data and incurs high complexity in the pre-training and fine-tuning stages. We propose a simple yet effective adaptive framework to solve the mismatch issue, which trains an embedding model as a transferable feature extractor, followed by support vector regression. Compared to the existing meta learning algorithm, our method does not necessarily need labelled data in the pre-training stage and does not require fine-tuning of the pre-trained model during adaptation. The effectiveness of the proposed method is verified through two well-known applications, namely the signal-to-interference-plus-noise ratio (SINR) balancing problem and the sum rate maximization problem. Furthermore, we extend the proposed method to online scenarios in non-stationary environments. Simulation results demonstrate the advantages of the proposed algorithm in terms of both performance and complexity. The proposed framework can also be applied to general radio resource management problems.
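The adaptation step can be sketched as a frozen feature extractor followed by support vector regression, as below; the embedding architecture, data shapes and regression targets are placeholders for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

# Pre-trained embedding model used as a frozen, transferable feature extractor
# (architecture and sizes are placeholders).
embedding = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
embedding.eval()

def features(channels):  # channels: (n, 16) real-valued array
    with torch.no_grad():
        return embedding(torch.as_tensor(channels, dtype=torch.float32)).numpy()

# Adaptation: fit SVR on a small support set from the new environment;
# no fine-tuning of the embedding model is required.
X_support = np.random.randn(20, 16)  # few-shot channel samples (placeholder)
y_support = np.random.randn(20)      # beamforming targets, e.g. power parameters
svr = SVR(kernel="rbf").fit(features(X_support), y_support)

y_query = svr.predict(features(np.random.randn(5, 16)))
```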
Accurate downlink channel information is crucial to beamforming design, but difficult to obtain in practice. This paper investigates a deep learning-based optimization approach for downlink beamforming to maximize the system sum rate when only uplink channel information is available. Our main contribution is a model-driven learning technique that exploits the structure of the optimal downlink beamforming solution to design an effective hybrid learning strategy aimed at maximizing the sum rate. This is achieved by jointly considering the learning performance of the downlink channel, the power and the sum rate in the training stage. The proposed approach applies to generic cases in which uplink channel information is available but its relation to the downlink channel is unknown, and it does not require explicit downlink channel estimation. We further extend the developed technique to massive multiple-input multiple-output scenarios and obtain a distributed learning strategy for multicell systems without inter-cell signalling overhead. Simulation results verify that the proposed method achieves performance close to that of state-of-the-art numerical algorithms with perfect downlink channel information, and significantly outperforms existing data-driven methods in terms of the sum rate.
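One well-known structural result that model-driven methods of this kind can exploit is that optimal MISO downlink beamformers take the form $\mathbf{w}_k \propto (\mathbf{I} + \sum_i \frac{q_i}{\sigma^2}\mathbf{h}_i\mathbf{h}_i^H)^{-1}\mathbf{h}_k$, so a network only needs to predict low-dimensional parameters $(\mathbf{q}, \mathbf{p})$; the sketch below recovers beamformers from such parameters (whether this exact parameterization matches the paper's is an assumption).

```python
import numpy as np

def beamformers_from_parameters(H, q, p, noise=1.0):
    """Recover downlink beamformers from the known optimal structure,
    given low-dimensional parameters predicted by a network.

    H: (K, N) channel matrix with rows h_k; q, p: length-K nonnegative vectors.
    """
    K, N = H.shape
    M = np.eye(N) + (H.T * (q / noise)) @ H.conj()    # I + sum_i (q_i/sigma^2) h_i h_i^H
    W = np.linalg.solve(M, H.T)                       # columns proportional to M^-1 h_k
    W = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm beam directions
    return W * np.sqrt(p)                             # scale by allocated powers

H = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)) / np.sqrt(2)
W = beamformers_from_parameters(H, q=np.ones(4), p=np.ones(4))
```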
Beamforming is evidently a core technology in recent generations of mobile communication networks. Nevertheless, an iterative process is typically required to optimize its parameters, making it ill-suited for real-time implementation due to the high complexity and computational delay. Heuristic solutions such as zero-forcing (ZF) are simpler, but at the expense of performance loss. Alternatively, deep learning (DL) is well understood to be a generalizing technique that, once sufficiently trained, can deliver promising results for a wide range of applications at much lower complexity. Consequently, DL presents itself as an attractive solution to beamforming. To exploit DL, this article introduces general data- and model-driven beamforming neural networks (BNNs), presents various possible learning strategies, and discusses complexity reduction for the DL-based BNNs. We also offer enhancement methods, such as training-set augmentation and transfer learning, to improve the generality of BNNs, accompanied by computer simulation and testbed results showing the performance of such BNN solutions.
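For reference, the ZF heuristic mentioned above can be computed in a few lines, as sketched below; the equal per-user power split is an illustrative choice.

```python
import numpy as np

def zero_forcing(H, total_power=1.0):
    """Zero-forcing beamforming: W = H^H (H H^H)^-1, with columns rescaled
    to meet the power budget (equal per-user power split here).

    H: (K, N) downlink channel matrix with K <= N users.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # pseudo-inverse directions
    W = W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm beams
    K = H.shape[0]
    return W * np.sqrt(total_power / K)               # equal power allocation

H = (np.random.randn(3, 6) + 1j * np.random.randn(3, 6)) / np.sqrt(2)
W = zero_forcing(H)
print(np.round(np.abs(H @ W), 6))  # ~diagonal: inter-user interference nulled
```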