As Part II of a three-part tutorial on holographic multiple-input multiple-output (HMIMO), this Letter focuses on the state-of-the-art in performance analysis and holographic beamforming for HMIMO communications. We commence by discussing the spatial degrees of freedom (DoF) and the ergodic capacity of a point-to-point HMIMO system, based on the channel model presented in Part I. Additionally, we consider the sum-rate analysis of multi-user HMIMO systems. Moreover, we review recent progress in holographic beamforming techniques developed for various HMIMO scenarios. Finally, we evaluate both the spatial DoF and the channel capacity through numerical simulations.
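As an illustrative sketch (not taken from the Letter), the ergodic capacity evaluated in the numerical simulations can be estimated by Monte Carlo averaging of the classic log-det formula; the i.i.d. Rayleigh channel and equal power allocation below are simplifying assumptions, not the HMIMO channel model of Part I.

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(nt, nr, snr_db, trials=500):
    """Monte Carlo estimate of the ergodic MIMO capacity (bits/s/Hz),
    assuming an i.i.d. Rayleigh channel and equal power allocation."""
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        # Draw a random nr x nt channel realization.
        h = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        gram = np.eye(nr) + (snr / nt) * h @ h.conj().T
        # slogdet avoids overflow and returns log det directly.
        _, logdet = np.linalg.slogdet(gram)
        caps.append(logdet / np.log(2))
    return float(np.mean(caps))

cap_10db = ergodic_capacity(4, 4, 10)
```

For a spatially continuous HMIMO aperture, the same averaging applies once `h` is drawn from the correlated channel model of Part I instead.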
By integrating a nearly infinite number of reconfigurable elements into a finite space, a spatially continuous array aperture is formed for holographic multiple-input multiple-output (HMIMO) communications. This three-part tutorial aims to provide an overview of the latest advances in HMIMO communications. As Part I of the tutorial, this letter first introduces the fundamental concept of HMIMO and reviews the recent progress in HMIMO channel modeling, followed by a suite of efficient channel estimation approaches. Finally, numerical results are provided to demonstrate the statistical consistency of the newly advocated HMIMO channel model with conventional ones and to evaluate the performance of the channel estimators. Parts II and III of the tutorial will delve into performance analysis and holographic beamforming, and detail the interplay of HMIMO with emerging technologies.
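A minimal sketch of one standard estimator from the family the letter reviews: the linear MMSE (LMMSE) channel estimate, which exploits the channel covariance supplied by a spatial correlation model. The pilot matrix `P`, covariance `R`, and noise variance are generic placeholders, not the letter's specific HMIMO formulation.

```python
import numpy as np

def lmmse_estimate(y, P, R, noise_var):
    """LMMSE channel estimate from pilot observations y = P h + n,
    given the channel covariance R and the noise variance.
    h_hat = R P^H (P R P^H + sigma^2 I)^{-1} y."""
    A = R @ P.conj().T
    return A @ np.linalg.solve(P @ A + noise_var * np.eye(len(y)), y)
```

With a well-conditioned covariance, the estimator shrinks the least-squares solution toward the channel's dominant spatial subspace, which is precisely why correlation-aware HMIMO models help at low SNR.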
Hybrid transceivers are designed for linear decentralized estimation (LDE) in a mmWave multiple-input multiple-output (MIMO) IoT network (IoTNe). For a noiseless fusion center (FC), it is demonstrated that the mean squared error (MSE) performance is determined by the number of RF chains used at each IoT node (IoTNo). Next, the minimum-MSE RF transmit precoder (TPC) and receive combiner (RC) matrices are designed for this setup using the dominant array response vectors, and subsequently, a closed-form expression is obtained for the baseband (BB) TPC at each IoTNo using Cauchy's interlacing theorem. For a realistic noisy FC, it is shown that the resultant MSE minimization problem is non-convex. To address this challenge, a block-coordinate descent-based iterative scheme is proposed for obtaining the fully digital TPC and RC matrices, followed by the simultaneous orthogonal matching pursuit (SOMP) technique for decomposing the fully digital transceiver into its corresponding RF and BB components. A theoretical proof of convergence is also presented for the proposed iterative design procedure. Furthermore, robust hybrid transceiver designs are derived for the practical scenario of channel state information (CSI) uncertainty. A centralized MMSE lower bound is also derived for benchmarking the performance of the proposed LDE schemes. Finally, our numerical results characterize the performance of the proposed transceivers and corroborate our various analytical propositions.
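The SOMP decomposition step admits a compact greedy sketch: approximate a fully digital precoder by RF beams drawn from a dictionary of array response vectors plus a least-squares baseband stage. The dictionary `A` and shapes below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def somp_decompose(F_opt, A, n_rf):
    """Greedy SOMP sketch: approximate the fully digital precoder
    F_opt (N x Ns) by F_rf @ F_bb, where the columns of F_rf are
    chosen from the dictionary A (N x D) of array response vectors."""
    residual = F_opt.copy()
    idx = []
    F_rf = F_bb = None
    for _ in range(n_rf):
        # Correlate every dictionary column with the residual and
        # pick the one capturing the most energy across all streams.
        corr = A.conj().T @ residual
        k = int(np.argmax(np.sum(np.abs(corr) ** 2, axis=1)))
        idx.append(k)
        F_rf = A[:, idx]
        # Least-squares baseband precoder for the selected beams.
        F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
        residual = F_opt - F_rf @ F_bb
    return F_rf, F_bb
```

A power renormalization of `F_bb` (omitted here) is typically appended so that the hybrid product meets the transmit power constraint.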
A spatial modulation-aided orthogonal time frequency space (SM-OTFS) scheme is proposed for high-Doppler scenarios, which relies on a low-complexity distance-based detection algorithm. We first derive the delay-Doppler (DD) domain input-output relationship of our SM-OTFS system by exploiting an SM mapper, followed by characterizing the doubly-selective channels considered. Then we propose a distance-based ordering subspace check detector (DOSCD) exploiting the \emph{a priori} information of the transmit symbol vector. Moreover, we derive the discrete-input continuous-output memoryless channel (DCMC) capacity of the system. Finally, our simulation results demonstrate that the proposed SM-OTFS system outperforms the conventional single-input multiple-output (SIMO)-OTFS system, and that the DOSCD conceived is capable of striking an attractive bit error ratio (BER) vs. complexity trade-off.
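The SM mapper mentioned above can be sketched in a few lines: part of the bit vector selects the active transmit antenna, while the remaining bits select a constellation symbol. The 4-antenna/QPSK split below is an illustrative assumption, not the paper's parameterization.

```python
import numpy as np

# Unit-energy QPSK constellation (Gray-mapped order assumed).
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def sm_map(bits, n_tx=4):
    """Spatial modulation mapper sketch: the first log2(n_tx) bits
    select the active antenna index, the next two bits select a
    QPSK symbol; all other antennas stay silent."""
    na = int(np.log2(n_tx))
    ant = int("".join(map(str, bits[:na])), 2)
    sym = QPSK[int("".join(map(str, bits[na:na + 2])), 2)]
    x = np.zeros(n_tx, dtype=complex)
    x[ant] = sym
    return x
```

Because only one antenna radiates per symbol, the antenna index itself carries information, which is what the distance-based detector exploits in the DD domain.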
As an evolving successor to the mobile Internet, the Metaverse creates the impression of an immersive environment, integrating the virtual as well as the real world. In contrast to the traditional mobile Internet based on servers, the Metaverse is constructed by billions of cooperating users harnessing their smart edge devices, which have limited communication and computation resources. In this immersive environment, an unprecedented amount of multi-modal data has to be processed. To circumvent this impending bottleneck, low-rate semantic communication might be harnessed in support of the Metaverse. But given that private multi-modal data is exchanged in the Metaverse, we have to guard against security breaches and privacy invasions. Hence we conceive a trustworthy semantic communication system for the Metaverse based on a federated learning architecture, exploiting its distributed decision-making and privacy-preserving capabilities. We conclude by identifying a suite of promising research directions and open issues.
The revolutionary technology of \emph{Stacked Intelligent Metasurfaces (SIM)} has recently been shown to be capable of carrying out advanced signal processing directly in the native electromagnetic (EM) wave domain. An SIM is fabricated as a sophisticated amalgam of multiple stacked metasurface layers, which may outperform its single-layer metasurface counterparts, such as reconfigurable intelligent surfaces (RISs) and metasurface lenses. We harness this new SIM concept for implementing efficient holographic multiple-input multiple-output (HMIMO) communications that do not require excessive radio-frequency (RF) chains, which constitutes a substantial benefit compared to existing implementations. We first present an HMIMO communication system based on a pair of SIMs at the transmitter (TX) and receiver (RX), respectively. In sharp contrast to conventional MIMO designs, the considered SIMs are capable of automatically accomplishing transmit precoding and receiver combining as the EM waves propagate through them. As such, each information data stream can be directly radiated and recovered from the corresponding transmit and receive ports. Secondly, we formulate the problem of minimizing the error between the actual end-to-end SIM-parametrized channel matrix and the target diagonal one, with the latter representing a flawless interference-free system of parallel subchannels. This is achieved by jointly optimizing the phase shifts associated with all the metasurface layers of both the TX-SIM and RX-SIM. We then design a gradient descent algorithm to solve the resultant non-convex problem. Furthermore, we theoretically analyze the HMIMO channel capacity bound and provide some useful fundamental insights. Extensive simulation results are provided for characterizing our SIM-based HMIMO system, quantifying its substantial performance benefits.
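A toy version of the phase-shift optimization can be sketched with a single metasurface layer: choose phases theta so that the cascaded channel H2 diag(e^{j theta}) H1 approaches a target matrix T by gradient descent. The single-layer model and closed-form gradient below are simplifying assumptions standing in for the multi-layer SIM problem.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_phases(H1, H2, T, steps=500, lr=0.5):
    """Gradient descent on metasurface phase shifts theta so that
    H2 @ diag(exp(j*theta)) @ H1 approaches the target matrix T.
    Uses the analytic gradient of f = ||H2 Phi H1 - T||_F^2:
    df/dtheta_k = -2 Im( e^{j theta_k} (H1 E^H H2)_{kk} )."""
    n = H1.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        Phi = np.diag(np.exp(1j * theta))
        E = H2 @ Phi @ H1 - T          # current fitting error
        G = H1 @ E.conj().T @ H2       # couples each phase to the error
        theta -= lr * (-2 * np.imag(np.exp(1j * theta) * np.diag(G)))
    loss = np.linalg.norm(H2 @ np.diag(np.exp(1j * theta)) @ H1 - T, "fro")
    return theta, float(loss)
```

In the full SIM setting the same chain rule is applied layer by layer through the stack, and the target T is the diagonal matrix representing interference-free parallel subchannels.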
It is anticipated that integrated sensing and communications (ISAC) will be one of the key enablers of next-generation wireless networks (such as beyond 5G (B5G) and 6G), supporting a variety of emerging applications. In this paper, we provide a comprehensive review of the recent advances in ISAC systems, with a particular focus on their foundations, system design, networking aspects and ISAC applications. Furthermore, we discuss the open questions that arise in each of these areas. Specifically, we commence with the information theory of sensing and communications (S$\&$C), followed by the information-theoretic limits of ISAC systems, shedding light on the fundamental performance metrics. Next, we discuss their clock synchronization and phase offset problems, the associated Pareto-optimal signaling strategies, as well as the associated super-resolution ISAC system design. Moreover, we envision that ISAC ushers in a paradigm shift for future cellular networks relying on network sensing, transforming the classic cellular architecture, cross-layer resource management methods, and transmission protocols. Regarding ISAC applications, we further highlight the security and privacy issues of wireless sensing. Finally, we close by studying the recent advances in a representative ISAC use case, namely the multi-object multi-task (MOMT) recognition problem using wireless signals.
Beamforming techniques have been widely used in the millimeter wave (mmWave) bands to mitigate the path loss of mmWave radio links by directionally concentrating the signal energy into narrow beams. However, traditional mmWave beam management algorithms usually require excessive channel state information overhead, leading to extremely high computational and communication costs. This hinders the widespread deployment of mmWave communications. By contrast, the revolutionary vision-assisted beam management system concept employed at base stations (BSs) can select the optimal beam for the target user equipment (UE) based on its location information determined by machine learning (ML) algorithms applied to visual data, without requiring channel information. In this paper, we present a comprehensive framework for a vision-assisted mmWave beam management system, its typical deployment scenarios, as well as the specifics of the framework. Then, some of the challenges faced by this system and their efficient solutions are discussed from the perspective of ML. Next, a new simulation platform is conceived to provide both visual and wireless data for model validation and performance evaluation. Our simulation results indicate that vision-assisted beam management is indeed attractive for next-generation wireless systems.
We propose a high-performance yet low-complexity hierarchical frequency synchronization scheme for orthogonal frequency-division multiple-access (OFDMA) aided distributed massive multi-input multi-output (MIMO) systems, where multiple carrier frequency offsets (CFOs) have to be estimated in the uplink. To solve this multi-CFO estimation problem efficiently, we classify the active antenna units (AAUs) as the master and the slaves. Then, we split the scheme into two stages. During the first stage the distributed slave AAUs are synchronized with the master AAU, while the user equipment (UE) is synchronized with the closest slave AAU during the second stage. The mean square error (MSE) performance of our scheme is better than that of the representative state-of-the-art baseline schemes, while its computational complexity is substantially lower.
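The per-link CFO estimation underlying both stages can be sketched with the classic correlation-based estimator over a training signal whose second half repeats the first; the repetition structure and normalization below are textbook assumptions, not the paper's specific pilot design.

```python
import numpy as np

def estimate_cfo(rx, n):
    """Correlation-based CFO estimate (cycles per sample) from a
    received training signal whose samples repeat with period n:
    the phase rotation between the two halves reveals the offset."""
    r1, r2 = rx[:n], rx[n:2 * n]
    # vdot conjugates r1, so the product's phase is 2*pi*f*n.
    return np.angle(np.vdot(r1, r2)) / (2 * np.pi * n)
```

In the hierarchical scheme, the same estimator would first align each slave AAU to the master, after which the UE estimates its residual CFO against the closest, already-synchronized slave.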
Wireless sensors are everywhere. To address their energy supply, we propose an end-to-end design for polar-coded integrated data and energy networking (IDEN), where the conventional signal processing modules, such as modulation/demodulation and channel decoding, are replaced by deep neural networks (DNNs). Moreover, the input-output relationship of the energy harvester (EH) is also modelled by a DNN. By jointly optimizing both the transmitter and the receiver as an autoencoder (AE), we minimize the bit error rate (BER) and maximize the harvested energy of the IDEN system, while satisfying the transmit power budget constraint imposed by the normalization layer in the transmitter. Our simulation results demonstrate that the DNN-aided end-to-end design conceived outperforms its conventional model-based counterpart both in terms of the harvested energy and the BER.
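The transmit-side normalization layer that enforces the power budget has a simple form, sketched below under the assumption of an average per-symbol power constraint (peak or per-antenna constraints would change the scaling).

```python
import numpy as np

def power_normalize(x, p_budget):
    """Normalization-layer sketch: scale a batch of transmit
    symbols so the average power per symbol equals p_budget,
    regardless of what the encoder DNN produced."""
    scale = np.sqrt(p_budget * x.size / np.sum(np.abs(x) ** 2))
    return x * scale
```

Placing this layer last in the transmitter makes the power constraint hold by construction, so the AE's loss can focus purely on the BER and harvested-energy objectives.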