Signal processing and communication communities have witnessed the rise of many exciting communication technologies in recent years. Notable examples include alternative waveforms, massive multiple-input multiple-output (MIMO) signaling, non-orthogonal multiple access (NOMA), joint communications and sensing, sparse vector coding, and index modulation. Now that the cellular industry has reached another important milestone with the development of 5G wireless networks and their diverse applications, it is inevitable that 6G wireless networks will require a rethinking of wireless communication systems and technologies, particularly at the physical layer (PHY). Within this perspective, this article aims to shed light on the rising concept of reconfigurable intelligent surface (RIS)-empowered communications towards 6G wireless networks. We discuss recent developments in the field and put forward promising candidates for future research and development. Specifically, we focus on active, transmitter-type, transmissive-reflective, and standalone RISs, discussing their advantages and disadvantages compared to reflective RIS designs. Finally, we envision an ultimate RIS architecture that can adjust its operation modes dynamically, and introduce the new concept of PHY slicing over RISs towards 6G wireless networks.
This paper investigates a novel intelligent reflecting surface (IRS)-based symbiotic radio (SR) system architecture consisting of a transmitter, an IRS, and an information receiver (IR). The primary transmitter communicates with the IR and at the same time assists the IRS in forwarding information to the IR. Based on the IRS's symbol period, we distinguish two scenarios, namely, commensal SR (CSR) and parasitic SR (PSR), where two different techniques for decoding the IRS signals at the IR are employed. We formulate bit error rate (BER) minimization problems for both scenarios by jointly optimizing the active beamformer at the base station and the phase shifts at the IRS, subject to a minimum primary rate requirement. Specifically, for the CSR scenario, a penalty-based algorithm is proposed to obtain a high-quality solution, where semi-closed-form solutions for the active beamformer and the IRS phase shifts are derived based on Lagrange duality and Majorization-Minimization methods, respectively. For the PSR scenario, we apply a bisection search-based method, successive convex approximation, and difference of convex programming to develop a computationally efficient algorithm, which converges to a locally optimal solution. Simulation results demonstrate the effectiveness of the proposed algorithms and show that the proposed SR techniques are able to achieve a lower BER than benchmark schemes.
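The joint beamformer and phase-shift designs above are too involved for a short sketch, but the primitive they build on can be illustrated. The following minimal numpy sketch shows the classical closed-form phase alignment for a single-antenna link, where each IRS element rotates its cascaded path onto the phase of the direct channel to maximize received power; the rank-one channel model, random channel draws, and all variable names are illustrative assumptions, not the paper's system model or BER-oriented design.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32                                                      # number of IRS elements
h_d = rng.standard_normal() + 1j * rng.standard_normal()    # direct Tx-Rx channel
h_r = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # Tx-IRS channel
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # IRS-Rx channel

# Closed-form phase alignment: rotate each cascaded path h_r[n]*g[n]
# onto the phase of the direct link h_d, so all paths add coherently.
theta = np.angle(h_d) - np.angle(h_r * g)
effective = h_d + np.sum(h_r * g * np.exp(1j * theta))
print(abs(effective) ** 2 / abs(h_d) ** 2)  # power gain over the direct link alone
```

With this choice of theta, the magnitude of the effective channel is exactly |h_d| plus the sum of the cascaded path magnitudes, which is the single-user optimum; the paper's Majorization-Minimization and penalty-based algorithms generalize this alignment to joint designs under rate and BER constraints.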
The Internet of Things (IoT) is penetrating many facets of our daily life with the proliferation of intelligent services and applications empowered by artificial intelligence (AI). Traditionally, AI techniques require centralized data collection and processing that may not be feasible in realistic application scenarios due to the massive scale of modern IoT networks and growing data privacy concerns. Federated Learning (FL) has emerged as a distributed collaborative AI approach that can enable many intelligent IoT applications by allowing AI training at distributed IoT devices without the need for data sharing. In this article, we provide a comprehensive survey of the emerging applications of FL in IoT networks, beginning with an introduction to the recent advances in FL and IoT and moving on to a discussion of their integration. In particular, we explore and analyze the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security. We then provide an extensive survey of the use of FL in key IoT applications such as smart healthcare, smart transportation, Unmanned Aerial Vehicles (UAVs), smart cities, and smart industry. The important lessons learned from this review of FL-IoT services and applications are also summarized. We complete this survey by discussing the current challenges and possible directions for future research in this booming area.
Broader applications of the Internet of Things (IoT) are expected in the forthcoming 6G system, although massive IoT is already a key scenario in 5G, which predominantly relies on physical layer solutions inherited from 4G LTE and primarily uses orthogonal multiple access (OMA). In 6G IoT, supporting a massive number of connections will be required for diverse services across vertical sectors, prompting fundamental studies on how to improve the spectral efficiency of the system. One of the key enabling technologies is non-orthogonal multiple access (NOMA). This paper consists of two parts. In the first part, finite blocklength theory and the diversity order of multi-user systems are used to show the significant potential of NOMA compared to traditional OMA. The advantage of NOMA over OMA is particularly pronounced for asynchronous contention-based systems relying on imperfect link adaptation, which are commonly assumed for massive IoT. To approach these performance bounds, in the second part of the paper, several promising technology directions are proposed for 6G massive IoT from the perspective of NOMA, including linear spreading, joint spreading and modulation, multi-user channel coding in the context of various techniques for practical uncoordinated transmissions, and cell-free operation.
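The finite-blocklength analysis invoked in the first part typically rests on the normal approximation to the maximal achievable rate (Polyanskiy-Poor-Verdu). As a hedged illustration (the exact bounds used in the paper may differ), the following sketch evaluates the standard AWGN normal approximation, showing the rate penalty that short IoT packets pay relative to Shannon capacity:

```python
from math import e, log2, sqrt
from statistics import NormalDist

def fbl_rate(snr, n, eps):
    """Normal-approximation achievable rate (bits/channel use) over an
    AWGN channel at blocklength n and block error probability eps."""
    C = log2(1 + snr)                                            # Shannon capacity
    V = snr * (snr + 2) / (2 * (snr + 1) ** 2) * log2(e) ** 2    # channel dispersion
    qinv = NormalDist().inv_cdf(1 - eps)                         # Q^{-1}(eps)
    return C - sqrt(V / n) * qinv + log2(n) / (2 * n)

# Short packets pay a visible rate penalty relative to capacity (~3.459 at SNR 10):
for n in (100, 1000, 10000):
    print(n, round(fbl_rate(10.0, n, 1e-3), 3))
```

The gap between `fbl_rate` and capacity shrinks roughly like 1/sqrt(n), which is why short-packet massive IoT cannot be analyzed with asymptotic capacity alone.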
Eigenvector perturbation analysis plays a vital role in various statistical data science applications. A large body of prior work, however, has focused on establishing $\ell_{2}$ eigenvector perturbation bounds, which are often highly inadequate for tasks that rely on fine-grained behavior of an eigenvector. This paper takes a step toward addressing this issue by studying the perturbation of linear functions of an unknown eigenvector. Focusing on two fundamental problems -- matrix denoising and principal component analysis -- in the presence of Gaussian noise, we develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector. In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators that (1) achieve minimax lower bounds for a family of scenarios (modulo some logarithmic factor), and (2) can be computed in a data-driven manner without sample splitting. Notably, the proposed estimators are nearly minimax optimal even when the associated eigen-gap is substantially smaller than what is required in prior theory.
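To make the object of study concrete, the following numpy sketch sets up the matrix denoising model (a rank-one signal plus symmetric Gaussian noise) and forms the natural "plug-in" estimate of a linear function $a^{\top}u$ from the top eigenvector of the observed matrix. The de-biasing correction developed in the paper is not reproduced here, and the dimension, signal strength, and choice of $a$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 500, 10.0                                   # dimension and signal strength
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                               # ground-truth unit eigenvector
H = rng.standard_normal((n, n))
H = (H + H.T) / np.sqrt(2 * n)                       # symmetric noise, entries ~ N(0, 1/n)
M = lam * np.outer(u, u) + H                         # observed matrix: signal + noise

# Plug-in estimate of the linear function a^T u via the top eigenvector of M.
vals, vecs = np.linalg.eigh(M)                       # eigenvalues in ascending order
u_hat = vecs[:, -1]
u_hat *= np.sign(u_hat @ u)                          # resolve the global sign ambiguity
a = np.zeros(n); a[0] = 1.0                          # e.g. a single coordinate of u
print(a @ u_hat, a @ u)                              # plug-in estimate vs. truth
```

When the eigen-gap is large (here `lam` well above the noise spectral norm of about 2), the plug-in estimate is already close; the paper's contribution concerns the bias that remains in harder regimes.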
The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centralized ML training or inference. To overcome these challenges, distributed learning and inference techniques have been proposed to enable edge devices to collaboratively train ML models without exchanging raw data, thus reducing the communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment and limited wireless resources (e.g., transmit power and radio spectrum) and hardware resources. This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we review the literature on communication techniques for its efficient deployment. We then present an illustrative example showing how to optimize wireless networks to improve the performance of that framework. Finally, we discuss future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks.
Consider a channel ${\bf Y}={\bf X}+ {\bf N}$ where ${\bf X}$ is an $n$-dimensional random vector, and ${\bf N}$ is a Gaussian vector with a covariance matrix ${\bf \mathsf{K}}_{\bf N}$. The object under consideration in this paper is the conditional mean of ${\bf X}$ given ${\bf Y}={\bf y}$, that is ${\bf y} \to E[{\bf X}|{\bf Y}={\bf y}]$. Several identities in the literature connect $E[{\bf X}|{\bf Y}={\bf y}]$ to other quantities such as the conditional variance, score functions, and higher-order conditional moments. The objective of this paper is to provide a unifying view of these identities. In the first part of the paper, a general derivative identity for the conditional mean is derived. Specifically, for the Markov chain ${\bf U} \leftrightarrow {\bf X} \leftrightarrow {\bf Y}$, it is shown that the Jacobian of $E[{\bf U}|{\bf Y}={\bf y}]$ is given by ${\bf \mathsf{K}}_{{\bf N}}^{-1} {\bf Cov} ( {\bf X}, {\bf U} | {\bf Y}={\bf y})$. In the second part of the paper, via various choices of ${\bf U}$, the new identity is used to generalize many of the known identities and derive some new ones. First, a simple proof of the Hatsell and Nolte identity for the conditional variance is shown. Second, a simple proof of the recursive identity due to Jaffer is provided. Third, a new connection between the conditional cumulants and the conditional expectation is shown. In particular, it is shown that the $k$-th derivative of $E[X|Y=y]$ is the $(k+1)$-th conditional cumulant. The third part of the paper considers some applications. In the first application, the power series and the compositional inverse of $E[X|Y=y]$ are derived. In the second application, the distribution of the estimation error $(X-E[X|Y])$ is derived. In the third application, we construct consistent estimators (empirical Bayes estimators) of the conditional cumulants from an i.i.d. sequence $Y_1,\ldots,Y_n$.
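The scalar case of the derivative identity (the Hatsell and Nolte identity) can be checked numerically: for a discrete prior on $X$ and Gaussian noise, the derivative of $E[X|Y=y]$ should equal the conditional variance divided by the noise variance. A minimal sketch, with an arbitrary illustrative prior and parameter values:

```python
from math import exp

def cond_moments(y, xs, ps, sigma):
    """Posterior mean and variance of a discrete X given Y = y,
    where Y = X + N and N ~ N(0, sigma^2)."""
    # Unnormalized posterior weights p(x) * exp(-(y-x)^2 / (2 sigma^2));
    # the Gaussian normalizing constant cancels in the ratio.
    w = [p * exp(-(y - x) ** 2 / (2 * sigma ** 2)) for x, p in zip(xs, ps)]
    Z = sum(w)
    m = sum(wi * x for wi, x in zip(w, xs)) / Z
    v = sum(wi * (x - m) ** 2 for wi, x in zip(w, xs)) / Z
    return m, v

xs, ps, sigma, y = [-1.0, 0.5, 2.0], [0.3, 0.5, 0.2], 0.8, 0.4
m, v = cond_moments(y, xs, ps, sigma)

# Central-difference derivative of the conditional mean in y.
h = 1e-5
num_deriv = (cond_moments(y + h, xs, ps, sigma)[0]
             - cond_moments(y - h, xs, ps, sigma)[0]) / (2 * h)
print(num_deriv, v / sigma ** 2)  # the two quantities should agree
```

This is the $k=1$ instance of the cumulant connection stated above: the first derivative of the conditional mean recovers the second conditional cumulant, scaled by the inverse noise variance.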
Mobile edge computing (MEC) has been envisioned as a promising paradigm to handle the massive volume of data generated from ubiquitous mobile devices for enabling intelligent services with the help of artificial intelligence (AI). Traditionally, AI techniques often require centralized data collection and training in a single entity, e.g., an MEC server, which is now becoming a weak point due to data privacy concerns and high data communication overheads. In this context, federated learning (FL) has been proposed to provide collaborative data training solutions, by coordinating multiple mobile devices to train a shared AI model without exposing their data, thereby considerably enhancing privacy. To improve the security and scalability of FL implementation, blockchain as a ledger technology is attractive for realizing decentralized FL training without the need for any central server. In particular, the integration of FL and blockchain leads to a new paradigm, called FLchain, which potentially transforms intelligent MEC networks into decentralized, secure, and privacy-enhancing systems. This article presents an overview of the fundamental concepts and explores the opportunities of FLchain in MEC networks. We identify several main topics in FLchain design, including communication cost, resource allocation, incentive mechanisms, and security and privacy protection. The key solutions for FLchain design are provided, and the lessons learned as well as the outlooks are also discussed. Then, we investigate the applications of FLchain in popular MEC domains, such as edge data sharing, edge content caching, and edge crowdsensing. Finally, important research challenges and future directions are also highlighted.
In this paper, the problem of minimizing the weighted sum of age of information (AoI) and total energy consumption of Internet of Things (IoT) devices is studied. In the considered model, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must find an optimal sampling frequency to sample the real-time dynamics of the physical system and send the sampled information to a base station (BS). Due to limited wireless resources, the BS can only select a subset of devices to transmit their sampled information. Meanwhile, changing the sampling frequency also impacts the energy used by each device for sampling and information transmission. Thus, it is necessary to jointly optimize the sampling policy of each device and the device selection scheme of the BS so as to accurately monitor the dynamics of the physical process using minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of AoI cost and energy consumption. To solve this problem, a distributed reinforcement learning approach is proposed to optimize the sampling policy. The proposed learning method enables the IoT devices to find the optimal sampling policy using their local observations. Given the sampling policy, the device selection scheme can then be optimized so as to minimize the weighted sum of AoI and energy consumption of all devices. Simulations with real PM 2.5 pollution data show that, compared to a conventional deep Q-network method and a uniform sampling policy, the proposed algorithm reduces the sum of AoI by up to 17.8% and 33.9%, respectively, and the total energy consumption by up to 13.2% and 35.1%.
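The AoI-energy trade-off at the heart of the formulated problem can be illustrated with a toy single-device simulation: sampling too rarely lets the age at the BS grow, while sampling too often wastes transmission energy, so the weighted cost is minimized at an interior sampling probability. This is only a caricature of the paper's model (no nonlinear dynamics, no device selection), and all parameter values are illustrative assumptions.

```python
import random

def simulate(p_sample, T=10000, e_tx=5.0, w=0.5, seed=1):
    """Average weighted cost w * AoI + (1 - w) * energy per slot when the
    device samples and transmits in each slot with probability p_sample."""
    rng = random.Random(seed)
    age, cost = 0, 0.0
    for _ in range(T):
        if rng.random() < p_sample:
            age = 0                      # fresh sample delivered, age resets
            cost += (1 - w) * e_tx       # pay the transmission energy
        else:
            age += 1                     # information at the BS grows stale
        cost += w * age
    return cost / T

# Sampling too rarely inflates AoI; sampling too often wastes energy.
for p in (0.05, 0.2, 0.45, 0.8):
    print(p, round(simulate(p), 2))
```

With these weights the cost is roughly $w(1-p)/p + (1-w)\,e_{\mathrm{tx}}\,p$, minimized near $p \approx 0.45$; the paper's reinforcement learning approach searches for such an operating point per device under time-varying dynamics and shared wireless resources.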
The dramatic success of deep learning is largely due to the availability of data. Data samples are often acquired on edge devices, such as smartphones, vehicles, and sensors, and in some cases cannot be shared due to privacy considerations. Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data. Learning in a federated manner differs from conventional centralized machine learning, and poses several unique core challenges and requirements that are closely related to classical problems studied in the areas of signal processing and communications. Consequently, dedicated schemes derived from these areas are expected to play an important role in the success of federated learning and the transition of deep learning from the domain of centralized servers to mobile edge devices. In this article, we provide a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools. We present a formulation for the federated learning paradigm from a signal processing perspective, and survey a set of candidate approaches for tackling its unique challenges. We further provide guidelines for the design and adaptation of signal processing and communication methods to facilitate federated learning at large scale.
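As a concrete instance of the paradigm described above, the following sketch implements federated averaging (FedAvg) for a one-parameter linear model: each device runs local SGD on its own data, and the server averages the returned models weighted by dataset size, so raw data never leaves the devices. The synthetic non-iid data and hyperparameters are illustrative assumptions, not a prescription from the article.

```python
import random

def local_sgd(w, data, lr=0.01, epochs=5):
    """One client's local update: SGD on squared loss for the model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fed_avg(clients, rounds=20, w0=0.0):
    """Server loop: broadcast the global model, run local SGD on each client,
    then average the returned models weighted by local dataset size."""
    w = w0
    total = sum(len(d) for d in clients)
    for _ in range(rounds):
        local_models = [local_sgd(w, d) for d in clients]
        w = sum(wi * len(d) for wi, d in zip(local_models, clients)) / total
    return w

# Non-iid clients: each observes x in a different range, all with true slope 3.0.
random.seed(0)
clients = [[(x, 3.0 * x) for x in [random.uniform(i, i + 1) for _ in range(10)]]
           for i in range(4)]
print(round(fed_avg(clients), 3))  # converges close to the true slope 3.0
```

Each round consists of one downlink broadcast and one uplink model upload per device, which is exactly the communication pattern whose cost, quantization, and over-the-air aggregation the signal processing literature studies.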