Abstract: This paper proposes a transmit beamforming strategy for integrated sensing and communication (ISAC) systems enabled by the novel stacked intelligent metasurface (SIM) architecture, where the base station (BS) simultaneously performs downlink communication and radar target detection via different beams. To ensure superior dual-function performance, we design the multi-layer cascaded beamformer by maximizing the sum rate of the users while optimally shaping the normalized beam pattern for detection. A dual-normalized differential gradient descent (D3) algorithm is further proposed to solve the resulting non-convex multi-objective problem (MOP), where gradient differences and dual normalization are employed to ensure a fair trade-off between the communication and sensing objectives. Numerical results demonstrate the superiority of the proposed beamforming design in balancing communication and sensing performance.
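The dual-normalization idea described above can be illustrated with a minimal sketch; the function name, the unit-norm normalization of each gradient, and the single-step update are our illustrative assumptions, not the paper's exact D3 formulation:

```python
import numpy as np

def d3_step(x, grad_comm, grad_sense, lr=0.01):
    """One dual-normalized gradient step (illustrative sketch).

    Each objective's gradient is normalized to unit length, so neither
    the communication nor the sensing objective dominates the update.
    """
    g_c = grad_comm(x)
    g_s = grad_sense(x)
    g_c = g_c / (np.linalg.norm(g_c) + 1e-12)
    g_s = g_s / (np.linalg.norm(g_s) + 1e-12)
    # Ascend the communication objective, descend the sensing mismatch.
    return x + lr * g_c - lr * g_s
```

Normalizing both gradients before combining them keeps the trade-off fair even when the two objectives have very different scales.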
Abstract: Recently, a new approach to Extended Reality (XR), denoted as XR-RF, has been proposed, realized by combining Radio Frequency (RF) Imaging and programmable wireless environments (PWEs). RF Imaging is a technique that aims to detect geometric and material features of an object through RF waves. The PWE, in turn, converts wireless RF propagation into a software-controllable entity through the use of Reconfigurable Intelligent Surfaces (RISs), whose interaction with impinging RF waves can be controlled. This dynamic synergy leverages the potential of RF Imaging to detect the structure of an object through RF wavefronts, and the PWE's ability to selectively replicate those wavefronts from one spatial location to wherever an XR-RF mobile user is currently located. The captured wavefront is then, through appropriate hardware, mapped to a visual representation of the object by machine learning models. As the wavefront replication mechanism is a key aspect of the XR-RF system workflow, this work introduces a new PWE configuration algorithm for XR-RF. Moreover, it is shown that the wavefront replication process inevitably introduces imprecision. Statistical analysis of simulation results shows that this imprecision can be effectively modeled by the gamma distribution.
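The gamma-distribution modeling step mentioned above can be sketched with standard tools; the synthetic error samples and the chosen parameters are our assumptions standing in for the paper's simulation results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical replication-error samples (wavefront mismatch magnitudes);
# in the paper these would come from the PWE simulation results.
errors = rng.gamma(shape=2.0, scale=0.5, size=5000)

# Fit a gamma distribution to the observed imprecision (location fixed at 0).
shape, loc, scale = stats.gamma.fit(errors, floc=0)

# Check goodness of fit with a Kolmogorov-Smirnov test.
ks_stat, p_value = stats.kstest(errors, "gamma", args=(shape, loc, scale))
print(f"shape={shape:.2f}, scale={scale:.2f}, KS p-value={p_value:.3f}")
```

A high KS p-value indicates the fitted gamma distribution is consistent with the observed imprecision samples.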
Abstract: In response to the increasing number of devices anticipated in next-generation networks, a shift toward over-the-air (OTA) computing has been proposed. Leveraging the superposition property of multiple-access channels, OTA computing enables efficient resource management by supporting simultaneous uncoded transmission in the time and frequency domains. To advance the integration of OTA computing, our study presents a theoretical analysis of practical issues encountered in current digital communication transceivers, such as time sampling error and intersymbol interference (ISI). To this end, we derive the theoretical mean squared error (MSE) of OTA transmission under time sampling error and ISI, and explore methods for minimizing it. Using alternating optimization, we also derive optimal power policies for both the devices and the base station. Additionally, we propose a novel deep neural network (DNN)-based approach to design waveforms that enhance OTA transmission performance under time sampling error and ISI. To ensure a fair comparison with existing waveforms such as the raised cosine (RC) and the better-than-raised-cosine (BTRC), we incorporate a custom loss function integrating energy and bandwidth constraints, along with practical design considerations such as waveform symmetry. Simulation results validate our theoretical analysis and demonstrate performance gains of the designed pulse over the RC and BTRC waveforms. To facilitate testing of our results without requiring recreation of the DNN structure, we also provide curve-fitting parameters for selected DNN-based waveforms.
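The RC pulse used as a baseline above is a standard textbook waveform; a minimal sketch of generating it (the roll-off value is an arbitrary choice for illustration, not taken from the paper):

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine pulse p(t) with symbol period T and roll-off beta."""
    t = np.asarray(t, dtype=float)
    num = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    # Handle the removable singularity at |t| = T / (2 * beta).
    singular = np.isclose(denom, 0.0)
    safe_denom = np.where(singular, 1.0, denom)
    limit = (np.pi / 4) * np.sinc(1.0 / (2.0 * beta))
    return np.where(singular, limit, num / safe_denom)
```

The pulse equals 1 at t = 0 and vanishes at all other integer multiples of T, which is the zero-ISI property that makes RC a natural benchmark for ISI-robust waveform design.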
Abstract: The near-space airship-borne communication network is recognized as an indispensable component of the future integrated ground-air-space network, thanks to airships' long-term residency at stratospheric altitudes, but it urgently needs reliable and efficient Airship-to-X links. To improve transmission efficiency and capacity, this paper proposes integrating semantic communication with massive multiple-input multiple-output (MIMO) technology. Specifically, we propose a deep joint semantic coding and beamforming (JSCBF) scheme for the airship-based massive MIMO image transmission network in near space, in which semantics from both the source and the channel are fused to jointly design the semantic coding and the physical-layer beamforming. First, we design two semantic extraction networks to extract semantics from the image source and the channel state information, respectively. Then, we propose a semantic fusion network that fuses these semantics into complex-valued semantic features for subsequent physical-layer transmission. To transmit the fused semantic features efficiently at the physical layer, we propose hybrid data- and model-driven semantic-aware beamforming networks. At the receiver, a semantic decoding network is designed to reconstruct the transmitted images. Finally, we perform end-to-end deep learning to jointly train all the modules, using the image reconstruction quality at the receivers as the metric. The proposed deep JSCBF scheme combines the efficient source compression and robust error-correction capabilities of semantic communication with the high spectral efficiency of massive MIMO, achieving a significant performance improvement over existing approaches.
Abstract: Federated learning (FL) is a decentralized learning technique that enables participating devices to collaboratively build a shared Machine Learning (ML) or Deep Learning (DL) model without revealing their raw data to a third party. Due to its privacy-preserving nature, FL has attracted widespread attention for building Intrusion Detection Systems (IDS) within the realm of cybersecurity. However, data heterogeneity across participating domains and entities presents significant challenges for the reliable implementation of an FL-based IDS. In this paper, we propose an effective method called Statistical Averaging (StatAvg) to alleviate non-independent and identically distributed (non-iid) features across local clients' data in FL. In particular, StatAvg allows the FL clients to share their individual data statistics with the server, which aggregates them to produce global statistics. These are shared with the clients and used for universal data normalisation. It is worth mentioning that StatAvg can seamlessly integrate with any FL aggregation strategy, as it is applied before the actual FL training process. The proposed method is evaluated against baseline approaches using datasets for network and host Artificial Intelligence (AI)-powered IDS. The experimental results demonstrate the effectiveness of StatAvg in mitigating non-iid feature distributions across the FL clients compared to the baseline methods.
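The aggregation of client statistics into global statistics can be sketched as follows; the triple format, function names, and the use of the law of total variance are our assumptions about the idea, not the paper's exact protocol:

```python
import numpy as np

def stat_avg(client_stats):
    """Aggregate per-client (n_samples, mean, var) triples into global stats.

    Combines the sample-size-weighted mean with the law of total variance,
    so the result matches the statistics of the pooled data without any
    client sharing raw samples.
    """
    ns = np.array([n for n, _, _ in client_stats], dtype=float)
    means = np.stack([m for _, m, _ in client_stats])
    variances = np.stack([v for _, _, v in client_stats])
    w = (ns / ns.sum())[:, None]
    g_mean = (w * means).sum(axis=0)
    # E[var within clients] + var[means across clients]
    g_var = (w * (variances + (means - g_mean) ** 2)).sum(axis=0)
    return g_mean, g_var

def normalise(x, g_mean, g_var, eps=1e-8):
    """Universal normalisation applied identically by every client."""
    return (x - g_mean) / np.sqrt(g_var + eps)
```

Because every client normalises with the same global statistics, feature scales become consistent across clients before FL training starts, regardless of the aggregation strategy used afterwards.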
Abstract: In the evolving landscape of sixth-generation (6G) wireless networks, which demand ultra-high data rates, this study introduces the concept of super constellation communications. We also present super amplitude phase shift keying (SAPSK), an innovative modulation technique designed to meet these ultra-high data rate demands. SAPSK is complemented by the generalized polar distance detector (GPD-D), which approximates the optimal maximum likelihood detector in channels with Gaussian phase noise (GPN). By leveraging the decision regions formulated by GPD-D, a tight closed-form approximation for the symbol error probability (SEP) of SAPSK constellations is derived, while a detection algorithm with O(1) time complexity is developed to ensure fast and efficient SAPSK symbol detection. Finally, the theoretical performance of SAPSK and the efficiency of the proposed O(1) algorithm are validated by numerical simulations, highlighting both SAPSK's superiority in terms of SEP compared to various constellations and its practical advantages in fast and accurate symbol detection.
Abstract: The active reconfigurable intelligent surface (RIS) has attracted significant attention as a recently proposed RIS architecture. Owing to its capability to amplify incident signals, an active RIS can mitigate the multiplicative fading effect inherent in passive RIS-aided systems. In this paper, we consider an active RIS-aided uplink multi-user massive multiple-input multiple-output (MIMO) system in the presence of phase noise at the active RIS. Specifically, we employ a two-timescale scheme, where the beamforming at the base station (BS) is adjusted based on the instantaneous aggregated channel state information (CSI), while the statistical CSI serves as the basis for designing the phase shifts at the active RIS, so that the feedback overhead and computational complexity can be significantly reduced. The aggregated channel, composed of the cascaded and direct channels, is estimated using the linear minimum mean square error (LMMSE) technique. Based on the estimated channel, we derive an analytical closed-form expression for a lower bound of the achievable rate. The power scaling laws of the active RIS-aided system are investigated based on the theoretical expressions. When the transmit power of each user is scaled down by the number of BS antennas M or reflecting elements N, we find that the thermal noise causes the lower bound of the achievable rate to approach zero as M or N increases to infinity. Moreover, an optimization approach based on genetic algorithms (GA) is introduced to tackle the phase shift optimization problem. Numerical results reveal that the active RIS can greatly enhance the performance of the considered system under various settings.
Abstract: Recent advancements in sensing, measurement, and computing technologies have significantly expanded the potential of signal-based applications, leveraging the synergy between signal processing and Machine Learning (ML) to improve both performance and reliability. This fusion represents a critical point in the evolution of signal-based systems, highlighting the need to bridge the existing knowledge gap between these two interdisciplinary fields. Despite many attempts in the existing literature to bridge this gap, most are limited to specific applications and focus mainly on feature extraction, often assuming extensive prior knowledge of signal processing. This assumption creates a significant obstacle for a wide range of readers. To address these challenges, this paper takes an integrated approach. It begins with a detailed tutorial on the fundamentals of signal processing, providing the reader with the necessary background knowledge. It then explores the key stages of a standard signal processing-based ML pipeline, offering an in-depth review of feature extraction techniques, their inherent challenges, and solutions. Differing from the existing literature, this work offers an application-independent review and introduces a novel classification taxonomy for feature extraction techniques. Furthermore, it links theoretical concepts with practical applications, demonstrated through two specific use cases: a spectral-based method for condition monitoring of rolling bearings and a wavelet energy analysis for epilepsy detection using EEG signals. Beyond its theoretical contributions, this work promotes a collaborative research culture by providing a public repository of relevant Python and MATLAB signal processing code. This effort is intended to support collaborative research and ensure the reproducibility of the presented results.
Abstract: In this paper, we consider the downlink transmission of a multi-antenna base station (BS) supported by an active simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) to serve single-antenna users via simultaneous wireless information and power transfer (SWIPT). In this context, we formulate an energy efficiency maximisation problem that jointly optimises the gain, element selection and phase shift matrices of the active STAR-RIS, the transmit beamforming of the BS, and the power splitting ratios of the users. Given the highly coupled and non-convex form of this problem, an alternating optimisation solution approach is proposed, using tools from convex optimisation and reinforcement learning. Specifically, semi-definite relaxation (SDR), difference of concave functions (DC) programming, and fractional programming techniques are employed to transform the non-convex optimisation problem into a convex form for optimising the BS beamforming vector and the SWIPT power splitting ratio. Then, by integrating meta-learning with modified deep deterministic policy gradient (DDPG) and soft actor-critic (SAC) methods, a combinatorial reinforcement learning network is developed to optimise the element selection, gain and phase shift matrices of the active STAR-RIS. Our simulations show the effectiveness of the proposed resource allocation scheme. Furthermore, the proposed active STAR-RIS-based SWIPT system outperforms its passive counterpart by 57% on average.
Abstract: Optical wireless communication (OWC) systems with multiple light-emitting diodes (LEDs) have recently been explored to support energy-limited devices via simultaneous lightwave information and power transfer (SLIPT). The energy consumption, however, becomes considerable as the number of incorporated LEDs increases. This paper proposes a joint dimming (JD) scheme that lowers the power consumption of a SLIPT-enabled OWC system by controlling the number of active LEDs. We further enhance the data rate of this system by utilizing rate splitting multiple access (RSMA). More specifically, we formulate a data rate maximization problem to optimize the beamforming design, LED selection and RSMA rate adaptation, subject to the power budget of the OWC transmitter, as well as quality-of-service (QoS) and energy harvesting requirements for the users. We propose a dynamic resource allocation solution based on proximal policy optimization (PPO) reinforcement learning. In simulations, the optimal dimming level is determined, revealing a trade-off between data rate and power consumption. It is also verified that RSMA significantly improves the data rate.