Future wireless communication systems aim to construct a ubiquitous intelligent information network with high data rates through cost-efficient devices. Benefiting from the tunability and programmability of metamaterials, the reconfigurable holographic surface (RHS), composed of numerous metamaterial radiation elements, has been developed as a promising solution to fulfill such challenging visions. Unlike reconfigurable intelligent surfaces (RISs), which are widely used as passive relays owing to their reflective characteristics, the RHS is intended to serve as an ultra-thin and lightweight surface antenna integrated with the transceiver, generating beams in desired directions by leveraging the holographic principle. In this article, we investigate RHS-aided wireless communications. Starting with a basic introduction to the RHS, including its hardware structure, holographic principle, and fabrication methodologies, we propose a hybrid beamforming scheme for RHS-aided multi-user communication systems. A joint sum-rate maximization algorithm is then developed, in which the digital beamforming performed at the base station and the holographic beamforming performed at the RHS are optimized iteratively. Finally, key challenges in RHS-aided wireless communications are discussed.
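As a rough numerical illustration of the holographic principle behind amplitude-only beamforming, the sketch below forms a beam with a 1-D surface of elements whose amplitudes record the interference between a reference (feed) wave and a desired object wave. The geometry, element spacing, reference-wave propagation constant, and normalized-interference weighting are all simplifying assumptions for illustration, not the article's actual design.

```python
import numpy as np

N, d = 64, 0.25          # elements and spacing (in wavelengths); illustrative
n = np.arange(N)
theta0 = np.deg2rad(30)  # desired beam direction
kref = 1.2               # assumed surface-wave propagation constant (in units of k0)

ref = np.exp(-1j * 2 * np.pi * kref * d * n)            # reference wave along the surface
obj = np.exp(-1j * 2 * np.pi * d * n * np.sin(theta0))  # desired object wave

# Holographic amplitude weights: normalized real part of the interference
# pattern, constrained to [0, 1] since the elements offer amplitude-only control.
w = (np.real(obj * np.conj(ref)) + 1) / 2

def pattern(theta):
    """Far-field magnitude radiated toward angle theta."""
    steer = np.exp(1j * 2 * np.pi * d * n * np.sin(theta))
    return np.abs(np.sum(w * ref * steer))

angles = np.deg2rad(np.linspace(-90, 90, 721))
gains = np.array([pattern(t) for t in angles])
peak = np.rad2deg(angles[np.argmax(gains)])
print(round(peak, 1))  # strongest radiation direction
```

Despite the amplitude-only constraint, the interference-pattern weighting steers the main lobe to the intended 30-degree direction.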
In this chapter, we mainly focus on collaborative training across wireless devices. Training an ML model is equivalent to solving an optimization problem, and many distributed optimization algorithms have been developed over the past decades. These distributed ML algorithms provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local. This addresses, to some extent, the privacy concern. They also provide computational scalability, as they allow exploiting computational resources distributed across many edge devices. In practice, however, this does not directly translate into a linear gain in overall learning speed with the number of devices. This is partly due to the communication bottleneck limiting the overall computation speed. Additionally, wireless devices are highly heterogeneous in their computational capabilities, and both their computation speed and communication rate can be highly time-varying due to physical factors. Therefore, distributed learning algorithms, particularly those to be implemented at the wireless network edge, must be carefully designed to take into account the impact of the time-varying communication network as well as the heterogeneous and stochastic computation capabilities of the devices.
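To make the data-locality idea concrete, here is a minimal federated-averaging sketch on synthetic data: each device runs local gradient steps on data that never leaves it, and only model parameters are exchanged. The toy linear model, learning rate, and dataset sizes are illustrative assumptions, not details from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Local datasets stay on the devices (data locality).
local_data = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    local_data.append((X, y))

w = np.zeros(2)  # global model held by the coordinating server
for rnd in range(50):  # communication rounds
    local_models = []
    for X, y in local_data:
        w_local = w.copy()
        for _ in range(5):  # a few local (full-batch) gradient steps
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_models.append(w_local)  # only parameters are uploaded
    w = np.mean(local_models, axis=0)  # server averages the models

print(np.round(w, 2))
```

The averaged model recovers the common underlying parameters even though no raw data is ever pooled; the communication cost per round is one parameter vector per device, which is exactly the bottleneck the chapter discusses.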
A correlated transmission and reflection (T&R) phase-shift model is proposed for passive lossless simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs). A STAR-RIS-aided two-user downlink communication system is investigated for both orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA). To evaluate the impact of the correlated T&R phase-shift model on the communication performance, three phase-shift configuration strategies are developed, namely the primary-secondary phase-shift configuration (PS-PSC), the diversity preserving phase-shift configuration (DP-PSC), and the T/R-group phase-shift configuration (TR-PSC) strategies. Furthermore, we derive the outage probabilities for the three proposed phase-shift configuration strategies as well as for the random phase-shift configuration and the independent phase-shift model, which constitute performance lower and upper bounds, respectively. Then, the diversity order of each strategy is investigated based on the obtained analytical results. It is shown that the proposed DP-PSC strategy achieves full diversity order simultaneously for users located on both sides of the STAR-RIS. Moreover, power scaling laws are derived for the three proposed strategies and for the random phase-shift configuration. Numerical simulations reveal a performance gain when the users on the two sides of the STAR-RIS are served by NOMA instead of OMA. Moreover, it is shown that the proposed DP-PSC strategy yields the same diversity order as achieved by STAR-RISs under the independent phase-shift model, and a comparable power scaling law with only a 4 dB reduction in received power.
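To illustrate how outage probabilities of different phase-shift configurations can be compared, the following generic Monte Carlo sketch contrasts an uncontrolled random configuration with ideal phase alignment over cascaded Rayleigh-faded element channels. The fading model, element count, and threshold are illustrative assumptions; the paper's results are analytical, not simulated this way.

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials = 8, 100_000
power_th = 1.0  # illustrative outage threshold on received power

# Cascaded per-element channels (both hops Rayleigh, unit average power).
h = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) / np.sqrt(2)
g = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) / np.sqrt(2)

# Random phase-shift configuration: uncontrolled uniform phases.
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(trials, N)))
power_random = np.abs(np.sum(h * phases * g, axis=1)) ** 2
p_out_random = np.mean(power_random < power_th)

# Idealized configuration: every element contribution co-phased.
power_aligned = np.sum(np.abs(h * g), axis=1) ** 2
p_out_aligned = np.mean(power_aligned < power_th)

print(p_out_random, p_out_aligned)
```

Coherent alignment drives the outage probability far below that of the random configuration, which is why the random configuration serves as a natural performance lower bound in the paper's comparisons.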
In this paper, we present a communication-efficient federated learning framework inspired by quantized compressed sensing. The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server (PS). Our strategy for gradient compression is to sequentially perform block sparsification, dimensionality reduction, and quantization. Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression. For accurate aggregation of the local gradients from the compressed signals at the PS, we put forth an approximate minimum mean square error (MMSE) approach for gradient reconstruction using the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm. Assuming a Bernoulli Gaussian-mixture prior, this algorithm iteratively updates the posterior mean and variance of the local gradients from the compressed signals. We also present a low-complexity approach for gradient reconstruction. In this approach, we use the Bussgang theorem to aggregate the local gradients from the compressed signals, then compute an approximate MMSE estimate of the aggregated gradient using the EM-GAMP algorithm. We also provide a convergence rate analysis of the presented framework. Using the MNIST dataset, we demonstrate that the presented framework achieves performance almost identical to that of the case without compression, while significantly reducing the communication overhead of federated learning.
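The device-side compression pipeline (sparsify, reduce dimension, quantize) can be sketched as follows. The block size, sparsity level, measurement count, and bit width below are illustrative choices, and the reconstruction side (EM-GAMP at the PS) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=256)  # one local gradient block (illustrative size)

# 1) Block sparsification: keep only the s largest-magnitude entries.
s = 32
sparse_g = np.zeros_like(g)
keep = np.argsort(np.abs(g))[-s:]
sparse_g[keep] = g[keep]

# 2) Dimensionality reduction: project with a random Gaussian
#    measurement matrix A (m < len(g)), as in compressed sensing.
m = 64
A = rng.normal(size=(m, g.size)) / np.sqrt(m)
y = A @ sparse_g

# 3) Scalar quantization of the measurements with B bits per entry.
B = 3
levels = 2 ** B
lo, hi = y.min(), y.max()
step = (hi - lo) / (levels - 1)
q = np.round((y - lo) / step)  # integer codewords uploaded to the PS
y_hat = lo + q * step          # de-quantized measurements

# 256 x 32-bit floats shrink to m x B = 192 bits (plus the lo/hi scalars).
print(y.shape, levels)
```

The PS would then recover the sparse gradient block from `q` using a compressed-sensing reconstruction such as the paper's EM-GAMP-based MMSE estimator.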
Cellular networks, such as 5G systems, are becoming increasingly complex in order to support various deployment scenarios and applications. Embracing artificial intelligence (AI) in 5G evolution is critical to managing this complexity and fueling the next quantum leap in 6G cellular networks. In this article, we share our experience and best practices in applying AI in cellular networks. We first present a primer on the state of the art of AI in cellular networks, including basic concepts and recent key advances. Then we discuss 3GPP standardization aspects and share various design rationales influencing standardization. We also present case studies with real network data to showcase how AI can improve network performance and enable network automation.
To leverage massive distributed data and computation resources, machine learning at the network edge is considered a promising technique, especially for large-scale model training. Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention owing to its communication efficiency and improved data privacy. Due to lossy communication channels and limited communication resources (e.g., bandwidth and power), it is of interest to investigate fast-responding and accurate FL schemes over wireless systems. Hence, we investigate the problem of jointly optimizing communication efficiency and resource allocation for FL over wireless Internet of Things (IoT) networks. To reduce complexity, we divide the overall optimization problem into two sub-problems, i.e., the client scheduling problem and the resource allocation problem. To reduce the communication costs of FL in wireless IoT networks, a new client scheduling policy is proposed that reuses stale local model parameters. To maximize successful information exchange over the network, a Lagrange multiplier method is first leveraged to decouple the variables, including power variables, bandwidth variables, and transmission indicators. Then a linear-search-based power and bandwidth allocation method is developed. Given appropriate hyper-parameters, we show that the proposed communication-efficient federated learning (CEFL) framework converges at a strong linear rate. Extensive experiments reveal that, compared to a basic FL approach with uniform resource allocation, the proposed CEFL framework substantially boosts both the communication efficiency and the learning performance, in terms of both training loss and test accuracy, for FL over wireless IoT networks.
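The idea of reusing stale local model parameters to cut communication can be sketched as follows: the server caches each client's last upload and a client transmits only when its fresh update deviates enough from that cache. The threshold, model dimension, and synthetic update process below are illustrative assumptions, not the paper's CEFL policy in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
K, dim, rounds = 8, 10, 20
tau = 0.5  # staleness threshold (illustrative)

# Server-side cache of each client's last uploaded parameters.
cache = np.zeros((K, dim))
uploads = 0
for r in range(rounds):
    for k in range(K):
        # Synthetic local update: the true local model drifts by random amounts.
        local = cache[k] + rng.normal(scale=rng.uniform(0, 0.3), size=dim)
        if np.linalg.norm(local - cache[k]) > tau:
            cache[k] = local  # schedule this client: pay the communication cost
            uploads += 1
        # else: reuse the stale cached parameters, no upload this round
    global_model = cache.mean(axis=0)  # aggregate, stale copies included

print(uploads, K * rounds)  # uploads actually made vs. full participation
```

Clients whose models barely changed skip their uploads, so the total number of transmissions falls well below the full-participation count while the aggregate still tracks every client.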
A simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) aided communication system is investigated, where an access point sends information to two users, one located on each side of the STAR-RIS. In contrast to existing works, which assume that the phase-shift coefficients for transmission and reflection can be adjusted independently (an assumption that is non-trivial to realize for purely passive STAR-RISs), a coupled transmission and reflection phase-shift model is considered. Based on this model, a power consumption minimization problem is formulated for both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). In particular, the amplitude and phase-shift coefficients for transmission and reflection are jointly optimized, subject to the rate constraints of the users. To solve this non-convex problem, an efficient element-wise alternating optimization algorithm is developed to find a high-quality suboptimal solution, whose complexity scales only linearly with the number of STAR elements. Finally, numerical results are provided for both NOMA and OMA to validate the effectiveness of the proposed algorithm by comparing its performance with that of STAR-RISs using the independent phase-shift model and conventional reflecting/transmitting-only RISs.
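The element-wise alternating structure, one coefficient updated at a time while all others stay fixed, so that complexity grows linearly with the number of elements per sweep, can be sketched in a deliberately simplified form. The sketch below uses a single effective channel and independent discrete phases for clarity; it does not model the coupled T&R constraint or the two-user rate constraints of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
# Cascaded channel coefficient seen through each element (illustrative).
c = rng.normal(size=N) + 1j * rng.normal(size=N)

theta = np.zeros(N)  # per-element phase shifts
candidates = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def gain(phases):
    """Magnitude of the combined channel for a given phase configuration."""
    return np.abs(np.sum(c * np.exp(1j * phases)))

# Element-wise alternating optimization: sweep the elements, and for each
# one pick the best candidate phase with all other phases held fixed.
for sweep in range(3):
    for n in range(N):
        best = max(candidates,
                   key=lambda t: gain(np.where(np.arange(N) == n, t, theta)))
        theta[n] = best

# The global optimum co-phases every term, giving sum of |c_n|.
print(gain(theta), np.sum(np.abs(c)))
```

Each coordinate update can only increase the objective, so the sweeps converge monotonically, here essentially to the co-phased optimum, which is the same mechanism that lets the paper's algorithm reach a high-quality suboptimal solution at linear per-sweep cost.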
The convergence of mobile edge computing (MEC) and blockchain is transforming the current computing services in mobile networks by offering task offloading solutions with security enhancement empowered by blockchain mining. Nevertheless, these important enabling technologies have been studied separately in most existing works. This article proposes a novel cooperative task offloading and block mining (TOBM) scheme for a blockchain-based MEC system, where each edge device not only handles data tasks but also performs block mining to improve the system utility. To address the latency issues caused by the blockchain operation in MEC, we develop a new Proof-of-Reputation consensus mechanism based on a lightweight block verification strategy. A multi-objective function is then formulated to maximize the system utility of the blockchain-based MEC system by jointly optimizing the offloading decision, channel selection, transmit power allocation, and computational resource allocation. We propose a novel distributed deep reinforcement learning-based approach using a multi-agent deep deterministic policy gradient algorithm. We then develop a game-theoretic solution that models the offloading and mining competition among edge devices as a potential game, and prove the existence of a pure Nash equilibrium. Simulation results demonstrate the significant system utility improvements of our proposed scheme over baseline approaches.
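The potential-game argument can be illustrated with a toy offloading congestion game: each device chooses between local computation at a fixed cost and offloading to a shared edge server whose cost grows with the number of offloaders. Such congestion games are potential games, so sequential best-response dynamics converge to a pure Nash equilibrium. All costs below are illustrative, and this toy game stands in for, rather than reproduces, the paper's TOBM game.

```python
# Best-response dynamics for a simple offloading congestion game.
K = 6
local_cost = [4, 5, 6, 7, 8, 9]   # illustrative per-device local compute costs
edge_cost = lambda n: 2 * n       # edge cost grows with n concurrent offloaders

choice = [0] * K  # 0 = compute locally, 1 = offload to the edge server
changed = True
while changed:
    changed = False
    for k in range(K):
        n_others = sum(choice) - choice[k]
        # Best response: offload iff the congested edge cost beats local cost.
        best = 1 if edge_cost(n_others + 1) < local_cost[k] else 0
        if best != choice[k]:
            choice[k] = best
            changed = True

print(choice)  # a pure Nash equilibrium: no device wants to deviate
```

The loop terminates because every unilateral improvement strictly decreases a common potential function; at the fixed point, the devices with cheap local computation stay local while the rest offload.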
Next-generation systems aim to increase both the speed and responsiveness of wireless communications, while supporting compelling applications such as edge and cloud computing, remote health, vehicle-to-infrastructure communications, etc. As these applications are expected to carry confidential personal data, ensuring user privacy becomes a critical issue. In contrast to traditional security and privacy designs that aim to prevent confidential information from being eavesdropped upon by adversaries, or learned by unauthorized parties, in this paper we consider designs that mask the users' identities during communication, hence resulting in anonymous communications. In particular, we examine the recent interest in physical layer (PHY) anonymous solutions. This line of research departs from conventional higher-layer anonymous authentication, encryption, and routing protocols, and judiciously manipulates the signaling pattern of transmitted signals in order to mask the senders' PHY characteristics. We first discuss the concept of anonymity at the PHY, and illustrate a strategy that is able to unmask the sender's identity by analyzing his or her PHY information only, i.e., signaling patterns and the inherent fading characteristics. Subsequently, we overview the emerging area of anonymous precoding, which preserves the sender's anonymity while ensuring a high receiver-side signal-to-interference-plus-noise ratio (SINR) for communication. This family of anonymous precoding designs represents a new approach to providing anonymity at the PHY, introducing a new dimension for privacy-preserving techniques.
A status updating system is considered in which data from multiple sources are sampled by an energy harvesting sensor and transmitted to a remote destination through an erasure channel. The goal is to deliver status updates of all sources in a timely manner, such that the cumulative long-term average age-of-information (AoI) is minimized. The AoI for each source is defined as the time elapsed since the generation time of the latest successful status update received at the destination from that source. Transmissions are subject to energy availability; energy arrives in units according to a Poisson process, with each energy unit capable of powering one transmission from a single source. The sensor is equipped with a unit-sized battery to store the incoming energy. A scheduling policy is designed to determine which source is sampled using the available energy. The problem is studied in two main settings: no erasure status feedback, and perfect instantaneous feedback.
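The AoI metric itself can be made concrete with a small simulation of the sawtooth age process. The sketch below simplifies to a single source, zero transmission delay, and a greedy transmit-on-arrival policy; the arrival rate, erasure probability, and horizon are illustrative assumptions rather than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10_000.0  # simulation horizon (illustrative time units)
eps = 0.2     # erasure probability of the channel

# Poisson energy arrivals (unit rate); with the greedy policy the sensor
# transmits immediately on each arrival, so the unit battery never overflows.
arrivals = np.cumsum(rng.exponential(1.0, size=20_000))
arrivals = arrivals[arrivals < T]

# Each transmission survives the erasure channel with probability 1 - eps.
deliveries = arrivals[rng.random(arrivals.size) > eps]

# AoI grows linearly and resets to zero at each successful delivery
# (fresh sample, zero delay). Integrate the sawtooth to get the average.
last, area = 0.0, 0.0
for d in deliveries:
    gap = d - last
    area += gap ** 2 / 2
    last = d
area += (T - last) ** 2 / 2
avg_aoi = area / T
print(round(avg_aoi, 2))
```

With successful deliveries forming a thinned Poisson process of rate 1 - eps = 0.8, the time-average AoI comes out near 1/0.8 = 1.25, which is the kind of baseline a scheduling policy across multiple sources would then try to improve upon.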