This paper investigates the achievable rate maximization problem for a downlink unmanned aerial vehicle (UAV)-enabled communication system aided by an intelligent omni-surface (IOS). Different from the state-of-the-art reconfigurable intelligent surface (RIS), which only reflects incident signals, the IOS can simultaneously reflect and transmit signals, thereby providing full-dimensional rate enhancement. We formulate the problem by jointly optimizing the IOS phase shifts and the UAV trajectory. Although the problem is difficult to solve optimally due to its non-convexity, we propose an efficient iterative algorithm to obtain a high-quality suboptimal solution. Simulation results show that IOS-assisted UAV communication achieves a more significant improvement in achievable rate than the benchmark schemes.
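To make the alternating structure of such an iterative algorithm concrete, the following minimal Python sketch co-phases the IOS elements in closed form for a fixed UAV position and then improves the position by numerical gradient ascent. The toy cascaded channel, the 2-D position model, and all parameters are illustrative assumptions, not the paper's exact algorithm or setup.

import numpy as np

rng = np.random.default_rng(0)
N = 32                            # number of IOS elements (assumed)
user = np.array([50.0, 20.0])     # user location (assumed)
# Fixed small-scale fading of the cascaded UAV-IOS-user link (assumed).
fading = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def cascaded_channel(pos):
    """Toy cascaded channel: fixed fading scaled by a distance-based path loss."""
    d = np.linalg.norm(pos - user) + 10.0
    return fading / d

def rate(pos, theta):
    """Achievable rate under a simple SNR model (noise power normalized to 1)."""
    h = cascaded_channel(pos)
    return np.log2(1.0 + abs(np.sum(h * np.exp(1j * theta))) ** 2)

pos = np.array([0.0, 0.0])        # initial UAV horizontal position (assumed)
for _ in range(20):
    # Step 1: with the position fixed, co-phasing the IOS elements is optimal
    # for this toy model.
    theta = -np.angle(cascaded_channel(pos))
    # Step 2: with the phases fixed, improve the position by a numerical
    # gradient-ascent step.
    eps, grad = 1e-3, np.zeros(2)
    for k in range(2):
        e = np.zeros(2); e[k] = eps
        grad[k] = (rate(pos + e, theta) - rate(pos - e, theta)) / (2 * eps)
    pos = pos + 50.0 * grad
print("final position:", pos, " rate (bits/s/Hz):", rate(pos, theta))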
Over-the-air computation (AirComp) has been recognized as a low-latency solution for wireless sensor data fusion, where multiple sensors send their measurement signals to a receiver simultaneously for computation. Most existing work has considered AirComp over a single frequency channel only. However, for a sensor network with a massive number of nodes, a single frequency channel may not be sufficient to accommodate the large number of sensors, and the AirComp performance will be severely limited. It is therefore highly desirable to have more frequency channels so that large-scale AirComp systems can benefit from multi-channel diversity. In this letter, we propose an $M$-frequency AirComp system, where each sensor selects a subset of the $M$ frequencies and broadcasts its signal over these channels under a given power constraint. We derive the optimal sensor transmission and receiver signal-processing strategies separately, and develop an algorithm for their joint design to achieve the best AirComp performance. Numerical results show that adding one frequency channel can improve AirComp performance threefold compared with the single-frequency case.
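As a rough illustration of the multi-channel idea (not the letter's jointly optimized design), the sketch below lets each sensor invert its strongest of the M channel gains under a power budget, while the receiver sums the per-channel superpositions to estimate the target sum. The channel model and all parameters are assumptions.

import numpy as np

rng = np.random.default_rng(1)
K, M, P, sigma = 40, 4, 1.0, 0.1     # sensors, channels, power budget, noise std (assumed)
x = rng.standard_normal(K)           # measurements; the target function is sum(x)
h = np.abs(rng.standard_normal((K, M))) + 0.1  # channel gains per sensor/channel

# Transmit rule (illustrative): each sensor uses its strongest channel and
# inverts the gain, clipped to respect the power budget.
b = np.zeros((K, M))
for k in range(K):
    m = h[k].argmax()
    b[k, m] = min(1.0 / h[k, m], np.sqrt(P))

y = (h * b * x[:, None]).sum(axis=0) + sigma * rng.standard_normal(M)  # per-channel sums
estimate = y.sum()                    # equal-gain combining across channels
print("target:", x.sum(), " estimate:", estimate)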
To tackle the low-data-rate issue of conventional LoRa systems, we propose two novel frequency-bin-index (FBI) LoRa schemes. In Scheme I, the indices of starting frequency bins (SFBs) are utilized to carry the information bits. To facilitate practical implementation, the SFBs of each LoRa signal are divided into several groups prior to modulation in the proposed FBI-LoRa system. To further improve flexibility, we formulate a generalized modulation scheme and propose Scheme II, which treats the SFB groups as an additional type of transmission entity. In Scheme II, the combinations of SFB indices and of SFB group indices are both exploited to carry information bits. We derive theoretical expressions for the bit error rate (BER) and throughput of the proposed FBI-LoRa system with both modulation schemes over additive white Gaussian noise (AWGN) and Rayleigh fading channels. Theoretical and simulation results show that the proposed FBI-LoRa schemes significantly increase the transmission throughput compared with existing LoRa systems at the expense of a slight loss in BER performance. Owing to these advantages, the proposed FBI-LoRa system is a promising alternative for high-data-rate Internet of Things (IoT) applications.
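The following sketch conveys the flavor of Scheme I's bit mapping under assumed parameters: the 2**SF starting frequency bins are split into G groups, and the bin index selected within each group carries an independent chunk of bits. The grouping rule and sizes are illustrative, not the paper's exact construction.

import numpy as np

SF, G = 7, 4                       # spreading factor and number of SFB groups (assumed)
bins_per_group = 2**SF // G        # 32 bins per group in this example
bits_per_group = int(np.log2(bins_per_group))  # 5 bits select a bin in each group

def modulate(bits):
    """Map G * bits_per_group information bits to G starting-frequency-bin indices."""
    assert len(bits) == G * bits_per_group
    sfbs = []
    for g in range(G):
        chunk = bits[g * bits_per_group:(g + 1) * bits_per_group]
        idx = int("".join(map(str, chunk)), 2)       # index within the group
        sfbs.append(g * bins_per_group + idx)        # absolute SFB index
    return sfbs

bits = list(np.random.default_rng(2).integers(0, 2, G * bits_per_group))
print("selected SFBs:", modulate(bits))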
Over-the-air computation (AirComp), which leverages the superposition property of the wireless multiple-access channel (MAC), is a promising technique for efficient data collection and computation over large-scale wireless sensor measurements in Internet of Things applications. Most existing work on AirComp has considered only spatially and temporally independent sensor signals, although in practice different sensor measurements are usually correlated. In this letter, we propose an AirComp system with spatially and temporally correlated sensor signals, and formulate the optimal AirComp policy design problem for minimizing the computation mean-squared error (MSE). We develop the optimal AirComp policy, which achieves the minimum computation MSE in each time step by utilizing both the current and the previously received signals. We also propose and optimize a low-complexity closed-form AirComp policy whose performance approaches that of the optimal policy.
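To illustrate why exploiting temporal correlation helps, the sketch below assumes the target aggregate follows an AR(1) process and fuses each new noisy AirComp observation with the previous estimate via a scalar Kalman filter. The signal model and parameters are assumptions for illustration; the letter's policy is more general than this stand-in.

import numpy as np

rng = np.random.default_rng(3)
rho, q, sigma, T = 0.9, 0.5, 0.3, 50   # AR coefficient, process/noise stds, steps (assumed)
s, s_hat, p = 0.0, 0.0, 1.0            # true aggregate, estimate, estimate variance

mse = []
for t in range(T):
    s = rho * s + q * rng.standard_normal()        # correlated target evolves
    y = s + sigma * rng.standard_normal()          # noisy AirComp observation
    # Predict from the previous estimate, then correct with the Kalman gain.
    s_pred, p_pred = rho * s_hat, rho**2 * p + q**2
    k = p_pred / (p_pred + sigma**2)
    s_hat = s_pred + k * (y - s_pred)
    p = (1 - k) * p_pred
    mse.append((s - s_hat)**2)
print("empirical MSE:", np.mean(mse))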
As one of the key communication scenarios in the 5th-generation and forthcoming 6th-generation (6G) cellular networks, ultra-reliable and low-latency communication (URLLC) will be central to the development of various emerging mission-critical applications. State-of-the-art mobile communication systems do not fulfill the end-to-end delay and overall reliability requirements of URLLC, and a holistic framework that takes into account latency, reliability, availability, scalability, and decision-making under uncertainty is lacking. Driven by recent breakthroughs in deep neural networks, deep learning algorithms have been considered promising tools for developing URLLC-enabling technologies in future 6G networks. This tutorial illustrates how to integrate theoretical knowledge of wireless communications (models, analysis tools, and optimization frameworks) into different kinds of deep learning algorithms for URLLC. We first introduce the background of URLLC and review promising network architectures and deep learning frameworks in 6G. To better illustrate how to improve learning algorithms with theoretical knowledge, we then revisit model-based analysis tools and cross-layer optimization frameworks for URLLC. Following that, we examine the potential of supervised/unsupervised deep learning and deep reinforcement learning in URLLC and summarize related open problems. Finally, we provide simulation and experimental results to validate the effectiveness of different learning algorithms and discuss future directions.
Envisioned as a promising component of future wireless Internet-of-Things (IoT) networks, the non-orthogonal multiple access (NOMA) technique can support massive connectivity with significantly increased spectral efficiency. Cooperative NOMA can further improve communication reliability for users under poor channel conditions. However, the conventional system design suffers from several inherent limitations and is not optimized from the bit error rate (BER) perspective. In this paper, we develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL). We design a hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized holistically. On this basis, we construct multiple loss functions to quantify the BER performance and propose a multi-task-oriented two-stage training method to solve the end-to-end training problem in a self-supervised manner. The learning mechanism of each DNN module is then analyzed based on information theory, offering insights into the proposed DNN architecture and its training method. We also adapt the proposed scheme to handle the power allocation (PA) mismatch between training and inference, and incorporate it with channel coding to combat signal deterioration. Simulation results verify its advantages over orthogonal multiple access (OMA) and the conventional cooperative NOMA scheme in various scenarios.
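The two-stage idea can be sketched in PyTorch as follows: stage one trains a module against its own loss with the preceding module frozen, and stage two fine-tunes the whole cascade on the end-to-end loss. The toy encoder/decoder modules, the AWGN link between them, and the binary cross-entropy surrogate for BER are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))   # stand-in transmit module
dec = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))   # stand-in receive module
bce = nn.BCEWithLogitsLoss()                                         # BER surrogate

def channel(x):
    """Toy AWGN link between the cascaded modules."""
    return x + 0.1 * torch.randn_like(x)

bits = torch.randint(0, 2, (256, 4)).float()   # bits serve as both input and label
# Stage 1: train the decoder with the encoder frozen (per-module loss).
opt1 = torch.optim.Adam(dec.parameters(), lr=1e-3)
for _ in range(200):
    opt1.zero_grad()
    bce(dec(channel(enc(bits).detach())), bits).backward()
    opt1.step()
# Stage 2: fine-tune all modules jointly on the end-to-end loss.
opt2 = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
for _ in range(200):
    opt2.zero_grad()
    bce(dec(channel(enc(bits))), bits).backward()
    opt2.step()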
To accommodate diverse quality-of-service (QoS) requirements in 5th-generation cellular networks, base stations need to optimize radio resources in real time under time-varying network conditions, which incurs high computing overhead and long processing delays. In this work, we develop a deep learning framework to approximate the optimal resource allocation policy that minimizes the total power consumption of a base station by optimizing the bandwidth and transmit power allocation. We find that a fully connected neural network (NN) cannot fully guarantee the QoS requirements owing to approximation errors and the quantization errors in the numbers of subcarriers. To tackle this problem, we propose a cascaded NN structure, in which the first NN approximates the optimal bandwidth allocation and the second NN outputs the transmit power required to satisfy the QoS requirements given that bandwidth allocation. Considering that the distribution of wireless channels and the types of services in wireless networks are non-stationary, we apply deep transfer learning to update the NNs in non-stationary environments. Simulation results validate that the cascaded NNs outperform the fully connected NN in terms of QoS guarantee, and that deep transfer learning remarkably reduces the number of training samples required.
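A minimal PyTorch sketch of the cascaded structure, with assumed layer sizes and the training loop omitted: the first NN maps the channel state to a subcarrier allocation, which is quantized to integers, and the second NN maps the state together with that quantized allocation to the required transmit powers.

import torch
import torch.nn as nn

K = 8                                   # number of users (assumed)
nn1 = nn.Sequential(nn.Linear(K, 32), nn.ReLU(), nn.Linear(32, K), nn.Softplus())
nn2 = nn.Sequential(nn.Linear(2 * K, 32), nn.ReLU(), nn.Linear(32, K), nn.Softplus())

state = torch.rand(1, K)                # e.g. large-scale channel gains (stand-in)
w = torch.round(nn1(state))             # integer numbers of subcarriers per user
p = nn2(torch.cat([state, w], dim=1))   # powers meeting QoS given w (after training)
print("subcarriers:", w)
print("powers:", p)

Feeding the quantized allocation into the second NN is what lets the cascade compensate for the quantization error that a single fully connected NN cannot absorb.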
In future 6th-generation networks, ultra-reliable and low-latency communication (URLLC) will lay the foundation for emerging mission-critical applications with stringent requirements on end-to-end delay and reliability. Existing works on URLLC are mainly based on theoretical models and assumptions; such model-based solutions provide useful insights but cannot be directly implemented in practice. In this article, we first summarize how to apply data-driven supervised deep learning and deep reinforcement learning to URLLC, and discuss several open problems of these methods. To address them, we develop a multi-level architecture that enables device, edge, and cloud intelligence for URLLC. The basic idea is to merge theoretical models and real-world data when analyzing latency and reliability and when training deep neural networks (DNNs). Deep transfer learning is adopted in the architecture to fine-tune pre-trained DNNs in non-stationary networks. Further, since the computing capacity of each user and each mobile edge computing server is limited, federated learning is applied to improve learning efficiency. Finally, we provide experimental and simulation results and discuss future directions.
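The federated-learning component of the architecture can be sketched as a single FedAvg round, as below; the linear model, stand-in local data, and hyperparameters are assumptions for illustration only.

import copy
import torch
import torch.nn as nn

global_model = nn.Linear(4, 1)                    # stand-in DNN
users = [copy.deepcopy(global_model) for _ in range(3)]

for local in users:                               # local fine-tuning on each device
    opt = torch.optim.SGD(local.parameters(), lr=0.01)
    x, y = torch.randn(32, 4), torch.randn(32, 1) # stand-in local data
    for _ in range(5):
        opt.zero_grad()
        nn.functional.mse_loss(local(x), y).backward()
        opt.step()

with torch.no_grad():                             # server-side weight averaging (FedAvg)
    for name, param in global_model.named_parameters():
        param.copy_(torch.stack(
            [dict(u.named_parameters())[name] for u in users]).mean(0))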
Crowd scene analysis has received growing attention owing to its wide range of applications. Obtaining accurate crowd locations (rather than merely crowd counts) is important for spatially identifying high-risk regions in congested scenes. In this paper, we propose a Compressed Sensing based Output Encoding (CSOE) scheme, which casts the detection of pixel coordinates of small objects as signal regression in an encoded signal space. CSOE boosts localization performance in circumstances where targets are highly crowded without large scale variation. In addition, proper receptive field sizes are crucial for crowd analysis because of human size variations. We create Multiple Dilated Convolution Branches (MDCB), which offer a set of different receptive field sizes to improve localization accuracy when object sizes change drastically within an image. We also develop an Adaptive Receptive Field Weighting (ARFW) module, which further addresses the scale-variation issue by adaptively emphasizing informative channels with appropriate receptive field sizes. Experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance across four mainstream datasets and performs especially well in highly crowded scenes. More importantly, the experiments support our insights that tackling target size variation is crucial in crowd analysis, and that casting crowd localization as regression in an encoded signal space is highly effective.
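A tiny sketch of the CSOE idea under assumed sizes: a sparse 0/1 location map is compressed by a random Gaussian sensing matrix into a short code (the regression target a network would predict), and locations are recovered by a small orthogonal matching pursuit decoder. The paper's actual matrices, map resolution, and decoder may differ.

import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 64, 3                 # map size, code length, number of targets (assumed)
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = 1.0                   # sparse location map
A = rng.standard_normal((m, n)) / np.sqrt(m)               # random sensing matrix
y = A @ s                            # short code a CNN would regress toward

# Orthogonal matching pursuit: recover the support from the (predicted) code.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))        # most correlated atom
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                           # update the residual
print("true:", sorted(np.flatnonzero(s)), " recovered:", sorted(support))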
Multi-parameter cognition in a cognitive radio network (CRN) provides a more thorough understanding of the radio environment and can potentially lead to far more intelligent and efficient spectrum usage for secondary users. In this paper, we investigate the multi-parameter cognition problem for a CRN in which the primary transmitter (PT) transmits at multiple power levels, and propose a learning-based two-stage spectrum sharing strategy. We first propose a data-driven, machine-learning-based multi-level spectrum sensing scheme comprising spectrum learning (Stage I) and prediction (the first part of Stage II). This fully blind sensing scheme requires no prior knowledge of the PT power characteristics. Then, based on a novel normalized power-level alignment metric, we propose two prediction-transmission structures, periodic and non-periodic, for spectrum access (the second part of Stage II), which enable the secondary transmitter (ST) to closely follow the PT power-level variations. The periodic structure features a fixed prediction interval, while the non-periodic one dynamically determines the interval with a proposed reinforcement learning algorithm to further improve the alignment metric. Finally, we extend the prediction-transmission structure to an online scenario in which the number of PT power levels may change as the PT adapts to environmental fluctuations or quality-of-service variations. Simulation results demonstrate the effectiveness of the proposed strategy in various scenarios.
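The non-periodic interval selection can be sketched as tabular Q-learning in which the action is the next prediction interval and the reward stands in for the alignment metric. The toy environment, reward shape, and hyperparameters below are assumptions, not the paper's formulation.

import numpy as np

rng = np.random.default_rng(5)
intervals = [1, 2, 4, 8]                        # candidate prediction intervals (assumed)
Q = np.zeros((len(intervals), len(intervals)))  # state = previously chosen interval
alpha, gamma, eps = 0.1, 0.9, 0.1               # learning rate, discount, exploration

def reward(a):
    """Toy alignment reward with an interior optimum at interval 4 (stand-in)."""
    return 1.0 - abs(intervals[a] - 4) / 8.0 + 0.05 * rng.standard_normal()

state = 0
for _ in range(2000):
    # Epsilon-greedy action selection over the candidate intervals.
    a = rng.integers(len(intervals)) if rng.random() < eps else int(Q[state].argmax())
    r = reward(a)
    Q[state, a] += alpha * (r + gamma * Q[a].max() - Q[state, a])
    state = a
print("preferred interval:", intervals[int(Q[state].argmax())])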