The synergy of metasurface-based holographic surfaces (HoloS) and reconfigurable intelligent surfaces (RIS) is considered a key enabler for future communication networks. However, the optimization of dynamic metasurfaces requires numerical algorithms, for example, based on the singular value decomposition (SVD) and gradient descent, which are usually computationally intensive, especially when the number of elements is large. In this paper, we analyze low-complexity designs for RIS-aided HoloS communication systems, in which the configurations of the HoloS transmitter and the RIS are given in closed form. We consider implementations based on diagonal and non-diagonal RISs. Over line-of-sight channels, we show that the proposed schemes provide performance close to that offered by computationally intensive numerical methods.
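To make the closed-form idea concrete, here is a minimal Python sketch, assuming randomly drawn, purely illustrative rank-one LoS channels rather than the paper's geometry or exact scheme: for a rank-one cascaded channel, a matched filter at the HoloS plus element-wise phase conjugation at the RIS attains the unit-modulus co-phasing bound, and the SVD-based transmit vector coincides with the closed-form one.

```python
import numpy as np

rng = np.random.default_rng(0)
N_tx, N_ris = 16, 64

# Illustrative rank-one LoS segments (unit-modulus, steering-vector-like)
a = np.exp(2j * np.pi * rng.random(N_ris))      # RIS array response
b = np.exp(2j * np.pi * rng.random(N_tx))       # HoloS array response
G = np.outer(a, b.conj())                       # HoloS -> RIS channel
h = np.exp(2j * np.pi * rng.random(N_ris))      # RIS -> user channel

H = np.diag(h.conj()) @ G                       # cascaded channel (N_ris x N_tx)

# Numerical baseline: dominant right singular vector of the cascaded channel
w_svd = np.linalg.svd(H)[2][0].conj()

# Closed form: matched filter at the HoloS, phase conjugation at the RIS
w = b / np.linalg.norm(b)
theta = -np.angle(H @ w)                        # co-phase every RIS element
snr = np.abs(np.exp(1j * theta) @ (H @ w)) ** 2

# For a rank-one channel the SVD and closed-form beamformers coincide,
# and co-phasing attains the unit-modulus bound (sum of magnitudes squared).
assert np.isclose(np.abs(np.vdot(w_svd, w)), 1.0)
assert np.isclose(snr, np.abs(H @ w).sum() ** 2)
print(f"closed-form SNR gain: {snr:.1f}")
```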
The ability of reconfigurable intelligent surfaces (RIS) to produce complex far-field radiation patterns is determined by various factors, such as the unit cell's size, shape, spatial arrangement, and tuning mechanism, the complexity of the communication and control circuitry, and the type of illuminating source (point or plane wave). Research on RIS has mainly focused on two areas: first, the optimization and design of unit cells to achieve desired electromagnetic responses within a specific frequency band; and second, the exploration of RIS applications in various settings, including system-level performance analysis. The former does not assume any specific radiation pattern at the surface level, while the latter does not consider any particular unit-cell design. Both approaches largely ignore the complexity and power requirements of the RIS control circuitry. As we progress towards the fabrication and deployment of RIS in real-world settings, it is becoming increasingly necessary to consider the interplay between the unit-cell design, the required surface-level radiation patterns, the control circuit's complexity, and the power requirements concurrently. In this paper, a benchmarking framework for RIS is employed to compare performance and analyze the tradeoffs between the unit cell's specified radiation patterns and the control circuit's complexity for far-field beamforming, considering different diode-based unit-cell designs for a given surface size. This work lays the foundation for jointly optimizing unit-cell designs and surface-level radiation patterns, facilitating the optimization of RIS-assisted wireless communication systems.
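As a toy illustration of this tradeoff, the sketch below compares the far-field beamforming loss of B-bit diode-based phase quantization on a uniform linear array; the element count, spacing, and steering angle are assumptions for illustration, not the paper's benchmarking framework. More diode states (higher control complexity) buy back beamforming gain.

```python
import numpy as np

N, d = 64, 0.5                        # elements and spacing (in wavelengths)
theta_0 = np.deg2rad(25)              # assumed steering direction

n = np.arange(N)
phase_ideal = -2 * np.pi * d * n * np.sin(theta_0)   # continuous co-phasing

def quantize(phase, bits):
    """Snap each phase to the nearest of the 2**bits diode states."""
    step = 2 * np.pi / 2 ** bits
    return np.round(phase / step) * step

for bits in (1, 2, 3):
    phi = quantize(phase_ideal, bits)
    af = np.exp(1j * (2 * np.pi * d * n * np.sin(theta_0) + phi)).sum()
    print(f"{bits}-bit unit cells: loss {20 * np.log10(N / abs(af)):.2f} dB")
```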
This work studies non-cooperative Multi-Agent Reinforcement Learning (MARL), where multiple agents interact in the same environment and each aims to maximize its individual return. Challenges arise when scaling up the number of agents, owing to the non-stationarity that large populations introduce. To address this issue, Mean Field Games (MFG) rely on symmetry and homogeneity assumptions to approximate games with very large populations. Recently, deep Reinforcement Learning has been used to scale MFG to games with a larger number of states. Current methods rely on smoothing techniques such as averaging the Q-values or the updates on the mean-field distribution. This work presents a different approach to stabilizing learning, based on proximal updates on the mean-field policy. We name our algorithm Mean Field Proximal Policy Optimization (MF-PPO), and we empirically show the effectiveness of our method in the OpenSpiel framework.
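The proximal step at the heart of such an update can be sketched as the standard PPO clipped surrogate applied to the mean-field policy; the log-probabilities and advantages below are illustrative placeholders, not quantities from the OpenSpiel experiments.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective (negated, so it can be minimized)."""
    ratio = np.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Illustrative values: the middle action looks better than the mean-field
# baseline, so the update raises its probability, but only within the clip.
logp_old = np.log([0.25, 0.50, 0.25])
logp_new = np.log([0.20, 0.60, 0.20])
advantages = np.array([-0.3, 1.1, -0.3])
print(ppo_clip_loss(logp_new, logp_old, advantages))
```

The clipping bounds the ratio between the new and old mean-field policies, which is what stabilizes learning in place of averaging Q-values or distribution updates.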
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we must also consider attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates, or to corrupt the model updates themselves); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in such environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications, using three deep neural networks under both non-independent and identically distributed (Non-IID) and independent and identically distributed (IID) data. On an attack classification problem, the poisoning attacks reduce accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.
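A minimal sketch of one such attack, assuming a toy logistic-regression task, FedAvg aggregation, and a label-flipping adversary (the article's models and dataset are far larger and deeper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_clients, n_poisoned = 10, 20, 6              # 6 of 20 clients are malicious

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Local logistic-regression training on one client."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)       # gradient step
    return w

w_true = rng.normal(size=d)
clients = []
for k in range(n_clients):
    X = rng.normal(size=(100, d))
    y = (X @ w_true > 0).astype(float)
    if k < n_poisoned:
        y = 1.0 - y                               # label-flipping poisoning
    clients.append((X, y))

w = np.zeros(d)
for _ in range(30):                               # FedAvg rounds
    updates = [local_sgd(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)                  # server-side averaging

X_te = rng.normal(size=(1000, d))
acc = np.mean(((X_te @ w) > 0) == (X_te @ w_true > 0))
print(f"accuracy with {n_poisoned}/{n_clients} poisoned clients: {acc:.2f}")
```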
The development of the Internet of Things (IoT) has dramatically transformed our daily lives, playing a pivotal role in enabling smart cities, healthcare, and buildings. Emerging technologies such as IoT seek to improve the quality of service in cognitive cities. Although IoT applications are helpful in smart buildings, they also pose a real risk: the large number of interconnected devices in those buildings, communicating over heterogeneous networks, increases the number of potential IoT attacks. Moreover, IoT applications can collect and transfer sensitive data. It is therefore necessary to develop new methods for detecting compromised IoT devices. This paper proposes a Feature Selection (FS) model based on Harris Hawks Optimization (HHO) and a Random Weight Network (RWN) to detect IoT botnet attacks launched from compromised IoT devices. Distributed Machine Learning (DML) aims to train models locally on edge devices without sharing data with a central server; therefore, we apply the proposed approach using both centralized and distributed ML models. Both learning models are evaluated on two benchmark datasets for IoT botnet attacks and compared with other well-known classification techniques using different evaluation indicators. The experimental results show an improvement in terms of accuracy, precision, recall, and F-measure in most cases, with the proposed method achieving an average F-measure of up to 99.9%. The results also show that the DML model achieves performance competitive with centralized ML while keeping the data local.
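A minimal sketch of an RWN (extreme-learning-machine-style) classifier scoring a candidate feature mask inside a wrapper FS loop such as HHO; the data, network width, and fitness weighting below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def rwn_fitness(mask, X_tr, y_tr, X_te, y_te, hidden=50, alpha=0.99):
    """Fitness of one binary feature mask: RWN accuracy minus a size penalty."""
    if mask.sum() == 0:
        return 0.0                                 # reject the empty subset
    Xtr, Xte = X_tr[:, mask], X_te[:, mask]
    W = rng.normal(size=(Xtr.shape[1], hidden))    # random input weights, never trained
    b = rng.normal(size=hidden)
    beta = np.linalg.pinv(np.tanh(Xtr @ W + b)) @ y_tr   # closed-form readout
    acc = np.mean((np.tanh(Xte @ W + b) @ beta > 0.5) == y_te)
    return alpha * acc + (1 - alpha) * (1 - mask.mean())

# Toy usage: HHO would evolve such masks and call rwn_fitness as its objective
X = rng.normal(size=(400, 20))
y = (X @ rng.normal(size=20) > 0).astype(float)
mask = rng.random(20) < 0.5                        # one candidate feature subset
print(rwn_fitness(mask, X[:300], y[:300], X[300:], y[300:]))
```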
The union of Edge Computing (EC) and Artificial Intelligence (AI) has brought forward the concept of Edge AI, which provides intelligent solutions close to the end-user environment for privacy preservation, low-latency to real-time performance, and resource optimization. Machine Learning (ML), the most advanced branch of AI in recent years, has shown encouraging results and applications in the edge environment. Nevertheless, edge-powered ML solutions are more complex to realize due to the joint constraints from both the edge computing and AI domains, and the corresponding solutions are expected to be efficient and adapted to technologies such as data processing, model compression, distributed inference, and advanced learning paradigms that meet Edge ML requirements. Although Edge ML has attracted great attention in both the academic and industrial communities, we noticed the lack of a complete survey of existing Edge ML technologies that provides a common understanding of this concept. To tackle this, this paper provides a comprehensive taxonomy and a systematic review of Edge ML techniques: we start by identifying the Edge ML requirements driven by the joint constraints. We then survey more than twenty paradigms and techniques, along with their representative work, covering two main parts: edge inference and edge learning. In particular, we analyze how each technique fits into Edge ML by meeting a subset of the identified requirements. We also summarize open issues in Edge ML to shed light on future research directions.
Reconfigurable intelligent surfaces have recently emerged as a promising technology for shaping the wireless environment by leveraging massive numbers of low-cost reconfigurable elements. Prior works mainly focus on single-layer metasurfaces, which lack the capability to suppress multiuser interference. By contrast, we propose a stacked intelligent metasurface (SIM)-enabled transceiver design for multiuser multiple-input single-output downlink communications. Specifically, the SIM has a multilayer structure and is deployed at the base station to perform transmit beamforming directly in the electromagnetic wave domain. As a result, an SIM-enabled transceiver obviates the need for digital beamforming and operates with low-resolution digital-to-analog converters and a moderate number of radio-frequency chains, which significantly reduces the hardware cost and energy consumption, while substantially decreasing the precoding delay thanks to the processing being performed in the wave domain. To leverage these benefits, we formulate an optimization problem that maximizes the sum rate of all users by jointly designing the transmit power allocated to them and the analog beamforming in the wave domain. Numerical results based on a customized alternating optimization algorithm corroborate the effectiveness of the proposed SIM-enabled analog beamforming design compared with various benchmark schemes. Most notably, the proposed analog beamforming scheme substantially decreases the precoding delay compared to its digital counterpart.
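The SIM forward model can be sketched as a cascade of trainable diagonal phase layers interleaved with fixed propagation matrices. In the sketch below the propagation matrices are random placeholders (in practice they would follow a diffraction model such as Rayleigh-Sommerfeld) and the layer sizes are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
L, N, K = 4, 49, 4                 # metasurface layers, meta-atoms per layer, users

def crandn(*shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

def sim_response(phases, W):
    """End-to-end N x N wave-domain transform of an L-layer SIM."""
    P = np.eye(N, dtype=complex)
    for l in range(L):
        P = W[l] @ np.diag(np.exp(1j * phases[l])) @ P   # phase layer, then propagation
    return P

W = [crandn(N, N) for _ in range(L)]              # fixed inter-layer propagation (placeholder)
phases = rng.uniform(0, 2 * np.pi, size=(L, N))   # the only optimization variables
H = crandn(K, N)                                  # SIM -> users channel (placeholder)

G_eff = H @ sim_response(phases, W)               # effective K x N channel seen by the users
# A sum-rate objective over G_eff and the per-user powers would drive the
# alternating optimization of `phases` described in the paper.
```

Because only the phase vectors are tunable, the "precoder" is applied passively as the wave propagates through the layers, which is the source of the delay advantage over digital precoding.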
This paper tackles the problem of recovering a low-rank signal tensor with possibly correlated components from a random noisy tensor, the so-called spiked tensor model. When the underlying components are orthogonal, they can be recovered efficiently using tensor deflation, which consists of successive rank-one approximations; non-orthogonal components, however, may alter the deflation mechanism and thereby prevent efficient recovery. Relying on recently developed random tensor tools, this paper deals precisely with the non-orthogonal case by deriving an asymptotic analysis of a parameterized deflation procedure performed on an order-three, rank-two spiked tensor. Based on this analysis, an efficient tensor deflation algorithm is proposed by optimizing the parameter introduced in the deflation mechanism, which in turn is proven to be optimal by construction for the studied tensor model. The same ideas could be extended to more general low-rank tensor models, e.g., higher ranks and orders, leading to more efficient tensor methods with a broader impact on machine learning and beyond.
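A sketch of the baseline (unparameterized) deflation mechanism on an order-three, rank-two spike with correlated components follows; the paper's contribution, the optimally tuned deflation parameter, is not reproduced here, and the dimensions, signal strengths, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30

def unit(a):
    return a / np.linalg.norm(a)

def correlated_unit(x, rho):
    """Unit vector with inner product rho with x (illustrative construction)."""
    g = rng.normal(size=x.size)
    g = unit(g - (g @ x) * x)
    return rho * x + np.sqrt(1 - rho**2) * g

def rank_one_power(T, iters=100):
    """Best rank-one approximation via alternating tensor power iteration."""
    u, v, w = (unit(rng.normal(size=s)) for s in T.shape)
    for _ in range(iters):
        u = unit(np.einsum('ijk,j,k->i', T, v, w))
        v = unit(np.einsum('ijk,i,k->j', T, u, w))
        w = unit(np.einsum('ijk,i,j->k', T, u, v))
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)    # singular-value estimate
    return lam, u, v, w

# Rank-two spike whose components have alignment rho along each mode
rho = 0.6
x1, y1, z1 = (unit(rng.normal(size=n)) for _ in range(3))
x2, y2, z2 = (correlated_unit(a, rho) for a in (x1, y1, z1))
T = 6.0 * np.einsum('i,j,k->ijk', x1, y1, z1) \
  + 4.0 * np.einsum('i,j,k->ijk', x2, y2, z2) \
  + rng.normal(size=(n, n, n)) / np.sqrt(n)      # additive Gaussian noise

lam1, u1, v1, w1 = rank_one_power(T)             # first deflation step
T1 = T - lam1 * np.einsum('i,j,k->ijk', u1, v1, w1)
lam2, u2, v2, w2 = rank_one_power(T1)            # second deflation step
print(f"alignments: |<u1,x1>| = {abs(u1 @ x1):.2f}, |<u2,x2>| = {abs(u2 @ x2):.2f}")
```

With rho > 0, the first recovered component absorbs part of the second, which is exactly the bias the parameterized deflation of the paper is designed to correct.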
This paper studies the exploitation of triple polarization (TP) for multi-user (MU) holographic multiple-input multiple-output surface (HMIMOS) wireless communication systems, aiming at boosting capacity without enlarging the antenna array size. We specifically consider that both the transmitter and the receiver are equipped with an HMIMOS comprising compact sub-wavelength TP patch antennas. To characterize TP MU HMIMOS systems, a TP near-field channel model is proposed using the dyadic Green's function, whose characteristics are leveraged to design a user-cluster-based precoding scheme that mitigates the cross-polarization and inter-user interference contributions. A theoretical correlation analysis for HMIMOS with infinitely small patch antennas is also presented. According to the proposed scheme, each user is assigned to one of the three polarizations, which is easy to implement, albeit at the cost of reduced diversity. Our numerical results showcase that the cross-polarization channel components have a non-negligible impact on the system performance, and that this impact is efficiently eliminated by the proposed MU precoding scheme.
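The building block of such a channel model, the free-space dyadic Green's function, can be computed directly; the sketch below (with an assumed wavelength and geometry, and an e^{+jkr} convention) returns the 3x3 coupling between transmit and receive polarizations, whose off-diagonal entries are the cross-polarization terms the precoder must handle.

```python
import numpy as np

def dyadic_green(r_rx, r_tx, k):
    """Free-space 3x3 dyadic Green's function at wavenumber k."""
    d = r_rx - r_tx
    r = np.linalg.norm(d)
    rh = d / r                                    # unit direction vector
    kr = k * r
    g = np.exp(1j * kr) / (4 * np.pi * r)         # scalar Green's function
    c_i = 1 + 1j / kr - 1 / kr**2                 # identity-term coefficient
    c_r = -1 - 3j / kr + 3 / kr**2                # outer-product coefficient
    return g * (c_i * np.eye(3) + c_r * np.outer(rh, rh))

k = 2 * np.pi / 0.01                              # assumed 10 mm wavelength
G = dyadic_green(np.array([0.05, 0.02, 0.30]), np.zeros(3), k)
# Off-diagonal entries of |G| are the cross-polarization couplings that the
# user-cluster-based precoding scheme is designed to mitigate.
print(np.round(np.abs(G), 4))
```

In the far field (kr large) the bracketed term reduces to the transverse projector I - rh rh^T, while the 1/kr and 1/kr^2 terms generate the additional near-field couplings exploited by the TP model.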
The reconfigurable intelligent surface (RIS) is considered a promising solution for next-generation wireless communication networks due to a variety of merits, e.g., the ability to customize the communication environment. In particular, deploying multiple RISs helps overcome severe signal blocking between the base station (BS) and users, offering a practical and effective way to achieve better service coverage. However, reaping the full benefits of a multi-RIS-aided communication system requires solving a non-convex, infinite-dimensional optimization problem, which motivates the use of learning-based methods to configure the optimal policy. This paper adopts a novel heterogeneous graph neural network (GNN) to effectively exploit the graph topology of the wireless communication optimization problem. First, we characterize all communication link features and interference relations in our system with a heterogeneous graph structure. Then, we maximize the weighted sum rate (WSR) of all users by jointly optimizing the active beamforming at the BS, the passive beamforming vectors of the RIS elements, and the RIS association strategy. Unlike most existing work, we consider a more general scenario in which the cascaded link for each user is not fixed but dynamically selected to maximize the WSR. Simulation results show that our proposed heterogeneous GNN performs about 10 times better than other benchmarks, and that a suitable RIS association strategy improves users' quality of service by 30%.
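The objective the GNN is trained to maximize can be written down explicitly; the sketch below evaluates the WSR of one candidate configuration over dynamically associated BS-RIS-user cascaded links, with channels, sizes, and the random initial configuration as illustrative assumptions (the joint optimization over beamformers, phases, and associations is what the learned policy replaces).

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, R, K = 8, 32, 3, 4          # BS antennas, elements per RIS, RISs, users

def crandn(*shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

G = crandn(R, N, M)               # BS -> RIS channels
h = crandn(R, K, N)               # RIS -> user channels
W = crandn(M, K)                  # active beamformers (one column per user)
theta = rng.uniform(0, 2 * np.pi, size=(R, N))    # passive beamforming phases
assoc = rng.integers(0, R, size=K)                # dynamic RIS association
weights, sigma2 = np.ones(K), 1e-2

def weighted_sum_rate(W, theta, assoc):
    """WSR over the cascaded BS-RIS-user links selected by `assoc`."""
    H_eff = np.stack([h[assoc[k], k] * np.exp(1j * theta[assoc[k]]) @ G[assoc[k]]
                      for k in range(K)])         # K x M effective channels
    rates = []
    for k in range(K):
        powers = np.abs(H_eff[k] @ W) ** 2        # signal and interference terms
        rates.append(np.log2(1 + powers[k] / (powers.sum() - powers[k] + sigma2)))
    return float(weights @ np.array(rates))

print(f"WSR of a random configuration: {weighted_sum_rate(W, theta, assoc):.2f} bit/s/Hz")
```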