This work investigates the application of the Beyond Diagonal Intelligent Reflective Surface (BD-IRS) to enhance THz downlink communication systems, operating in a hybrid reflective/transmissive mode to simultaneously serve indoor and outdoor users. We propose an optimization framework that jointly designs the beamforming vectors and phase shifts in the hybrid reflective/transmissive mode, aiming to maximize the system sum rate. To tackle the challenges of the joint design problem, we employ the conjugate gradient method and propose an iterative algorithm that alternately optimizes the hybrid beamforming vectors and the phase shifts. Comprehensive numerical simulations demonstrate a significant rate improvement over existing benchmark schemes, including time- and frequency-divided approaches, by approximately $30.5\%$ and $70.28\%$, respectively. The results also show that the IRS elements influence system performance more strongly than the base station antennas do, highlighting their pivotal role in advancing communication system efficacy.
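The alternating strategy, pairing a closed-form beamformer update with a conjugate-gradient phase-shift refinement, can be illustrated with a minimal single-user sketch. Everything here is an assumption standing in for the paper's actual model: toy Rayleigh channels replace the THz channel, a simple single-user rate replaces the sum rate, and SciPy's generic CG routine replaces the tailored algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, N, P, sigma2 = 4, 16, 1.0, 1e-3   # BS antennas, IRS elements, power, noise (toy values)

# Illustrative placeholder channels, not the paper's THz model
h_d = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # direct BS -> user
G   = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # BS -> IRS
h_r = rng.standard_normal(N) + 1j * rng.standard_normal(N)            # IRS -> user

def rate(phi, w):
    # Single-user rate through the direct plus IRS-cascaded channel
    h_eff = h_d + (h_r.conj() * np.exp(1j * phi)) @ G
    return np.log2(1.0 + np.abs(h_eff @ w) ** 2 / sigma2)

phi = np.zeros(N)
for _ in range(3):  # alternating optimization
    # Step 1: with phases fixed, maximum-ratio transmission is the optimal beamformer
    h_eff = h_d + (h_r.conj() * np.exp(1j * phi)) @ G
    w = np.sqrt(P) * h_eff.conj() / np.linalg.norm(h_eff)
    # Step 2: with w fixed, refine the IRS phases by conjugate gradient
    phi = minimize(lambda p: -rate(p, w), phi, method="CG").x

print(f"achievable rate: {rate(phi, w):.2f} bit/s/Hz")
```

Each half-step can only improve the objective, so the loop is monotone, mirroring the convergence argument such alternating schemes typically rely on.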
This paper introduces a joint optimization framework for user-centric beam selection and linear precoding (LP) design in a coordinated multiple-satellite (CoMSat) system employing a discrete-Fourier-transform-based (DFT) beamforming (BF) technique. Aiming to serve users at their target SINRs while minimizing the total transmit power, the scheme efficiently determines which satellites each user should associate with, activates the best cluster of beams, and optimizes the LP for every satellite-to-user transmission. These objectives are first formulated as a complex mixed-integer programming (MIP) problem. To tackle it, we reformulate it as a joint cluster-association and LP design problem. Then, by theoretically analyzing the duality between downlink and uplink transmissions, we develop an efficient iterative method to identify the optimal solution. Additionally, a simpler duality-based approach for rapid beam selection and LP design is presented for comparison. Simulation results underscore the effectiveness of the proposed schemes across various settings.
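As a rough illustration of DFT-based beamforming, the sketch below builds the orthonormal DFT codebook, whose columns act as fixed beams, and picks the strongest beam for a single user. The array size, half-wavelength spacing, and user angle are arbitrary assumptions; the satellite-association and LP steps of the framework are omitted.

```python
import numpy as np

N = 8  # antennas per satellite (illustrative)
# DFT codebook: each column is one fixed beam-steering vector
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

# The beams are orthonormal, so selecting among them decouples nicely
assert np.allclose(F.conj().T @ F, np.eye(N))

def beam_gain(u, b):
    """Gain of codebook beam b toward a user at normalized spatial frequency u."""
    a = np.exp(2j * np.pi * u * np.arange(N)) / np.sqrt(N)  # array response
    return np.abs(a.conj() @ F[:, b]) ** 2

best = max(range(N), key=lambda b: beam_gain(0.25, b))  # strongest beam for this user
```

For `u = 0.25` the user lies exactly on one DFT grid point, so a single beam captures the full gain while all others are nulled; off-grid users would instead motivate activating a small cluster of adjacent beams, as in the abstract.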
Although the reconfigurable intelligent surface (RIS) is a promising technology for shaping the propagation environment, it consists of a single-layer structure with inherent limitations on the number of beam-steering patterns. Building on the recently introduced and revolutionary stacked intelligent metasurface (SIM) technology, we propose its deployment not only at the base station (BS) side of a massive multiple-input multiple-output (mMIMO) setup but also in the intermediate space between the BS and the users to further adjust the environment as needed. For convenience, we call the former the BS SIM (BSIM) and the latter the channel SIM (CSIM). Hence, we achieve wave-based combining at the BS and wave-based configuration in the intermediate space. Specifically, we propose a channel estimation method with reduced overhead, which is crucial for SIM-assisted communications. Next, we derive the uplink sum spectral efficiency (SE) in closed form in terms of statistical channel state information (CSI). Notably, we optimize the phase shifts of both the BSIM and the CSIM simultaneously by using the projected gradient ascent method (PGAM). Compared to previous works on SIMs, we study the uplink transmission, a mMIMO setup, channel estimation in a single phase, a second SIM in the intermediate space, and the simultaneous optimization of the two SIMs. Simulation results show the impact of various parameters on the sum SE and demonstrate the superiority of our optimization approach over the alternating optimization (AO) method.
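The core mechanic behind PGAM, a gradient ascent step followed by projection back onto the unit-modulus constraint set, can be sketched on a toy surrogate. The objective here (received power through a single cascaded channel) is an assumption standing in for the paper's closed-form sum SE, and a fixed step size replaces any line search.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                                     # metasurface elements (illustrative)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # toy cascaded channel

def objective(theta):
    # Toy surrogate: received power |h^H theta|^2, not the paper's closed-form SE
    return np.abs(h.conj() @ theta) ** 2

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # random unit-modulus start
mu = 0.1                                           # fixed step size (assumed)
for _ in range(100):
    grad = h * (h.conj() @ theta)    # Wirtinger gradient of |h^H theta|^2 w.r.t. theta*
    theta = theta + mu * grad        # ascent step
    theta = theta / np.abs(theta)    # projection onto the unit-modulus set
```

For this surrogate the optimum is known in closed form, `(sum |h_i|)^2`, reached when every element's phase aligns the cascaded path, which makes the sketch easy to sanity-check.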
The emerging reflecting intelligent surface (RIS) technology promises to enhance the capacity of wireless communication systems via passive reflect beamforming. However, the product path loss limits its performance gains. The fully-connected (FC) active RIS, which integrates reflect-type power amplifiers into the RIS elements, has recently been introduced in response to this issue. Sub-connected (SC) active RIS and hybrid FC-active/passive RIS variants, which employ a limited number of reflect-type power amplifiers, have also been proposed to provide energy savings. Nevertheless, their flexibility in balancing diverse capacity requirements and power consumption constraints is limited. In this direction, this study introduces novel hybrid RIS structures, wherein at least one reflecting sub-surface (RS) adopts the SC-active RIS design. The asymptotic signal-to-noise ratio of the FC-active/passive and the proposed hybrid RIS variants is analyzed in a single-user single-input single-output setup. Furthermore, the transmit and RIS beamforming weights are jointly optimized in each scenario to maximize the energy efficiency of a hybrid RIS-aided multi-user multiple-input single-output downlink system subject to the power consumption constraints of the base station and the active RSs. Numerical simulation and analytical results highlight the performance gains of the proposed RIS designs over benchmarks, unveil non-trivial trade-offs, and provide valuable insights.
The simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) is a promising implementation of RIS-assisted systems that enables full-space coverage. However, the STAR-RIS, like the conventional RIS, suffers from the double-fading effect. Thus, in this paper, we propose the marriage of the active RIS and the STAR-RIS, denoted as ASTARS, for massive multiple-input multiple-output (mMIMO) systems, focusing on the energy splitting (ES) and mode switching (MS) protocols. Compared to prior literature, we account for the impact of correlated fading and base our analysis on the two-timescale protocol, which depends on statistical channel state information (CSI). On this ground, we propose a channel estimation method for ASTARS with reduced overhead that accounts for its architecture. Next, we derive a closed-form expression for the achievable sum rate of both types of users in the transmission and reflection regions in a unified approach with significant practical advantages, such as reduced complexity and overhead, which result in fewer iterations for convergence compared to an alternating optimization (AO) approach. Notably, we simultaneously optimize the amplitudes, the phase shifts, and the active amplifying coefficients of the ASTARS by applying the projected gradient ascent method (PGAM). Remarkably, the proposed optimization can be executed once every several coherence intervals, which reduces the processing burden considerably. Simulations corroborate the analytical results, provide insight into the effects of fundamental variables on the achievable sum SE, and demonstrate the superiority of ASTARS over the passive STAR-RIS for a practical number of surface elements.
Accurate asset localization is of paramount importance across various industries, ranging from transportation management to search and rescue operations. In scenarios where the traditional positioning equations cannot be adequately solved because the receiver obtains too few measurements, Non-Terrestrial Networks (NTN) based on Low Earth Orbit (LEO) satellites can prove pivotal for precise positioning. The decision to employ NTN in lieu of conventional Global Navigation Satellite Systems (GNSS) is rooted in two key factors. First, GNSS is susceptible to jamming and spoofing attacks, which compromises its reliability, whereas LEO satellite link budgets benefit from closer distances and the new mega-constellations can offer more satellites in view than GNSS. Second, 5G service providers seek to reduce their dependence on third-party services. Presently, NTN operation necessitates a GNSS receiver within the User Equipment (UE), placing the service provider at the mercy of GNSS reliability. Consequently, when GNSS signals are unavailable in certain regions, NTN services are also rendered inaccessible.
Spiking neural networks (SNNs) implemented on neuromorphic processors (NPs) can enhance the energy efficiency of artificial intelligence (AI) deployments for specific workloads. As such, NPs represent an interesting opportunity for implementing AI tasks on board power-limited satellite communication spacecraft. In this article, we disseminate the findings of a recently completed study that compared the performance and power consumption of different satellite communication use cases implemented on standard AI accelerators and on NPs. In particular, the article describes three prominent use cases, namely payload resource optimization, onboard interference detection and classification, and dynamic receive beamforming, and compares the performance of conventional convolutional neural networks (CNNs) implemented on Xilinx's VCK5000 Versal development card with that of SNNs on Intel's neuromorphic chip Loihi 2.
Both space and ground communications have proven to be effective solutions, from different perspectives, in Internet of Things (IoT) networks. This paper investigates multiple-access scenarios in which many IoT users are cooperatively served by a satellite in space and access points (APs) on the ground. The available users in each coherence interval are split into scheduled and unscheduled subsets to optimize the limited radio resources. We compute the uplink ergodic throughput of each scheduled user under imperfect channel state information (CSI) and non-orthogonal pilot signals. Since maximum-ratio combining is deployed locally at the ground gateway and the APs, the uplink ergodic throughput is obtained in a closed-form expression. The analytical results explicitly unveil the effects of the channel conditions and pilot contamination on each scheduled user. By maximizing the sum throughput, the system can simultaneously determine the scheduled users and perform power allocation, based on either a model-based approach with alternating optimization or a learning-based approach with a graph neural network. Numerical results demonstrate that integrated satellite-terrestrial cell-free massive multiple-input multiple-output systems can significantly improve the sum ergodic throughput over coherence intervals. The integrated systems can schedule the vast majority of users, although some might be out of service due to the limited power budget.
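The local maximum-ratio combining step can be sketched as follows. The toy setup assumes perfect CSI and Rayleigh channels, unlike the paper's imperfect-CSI analysis with pilot contamination, and the gateway fusion is reduced to a simple normalized sum.

```python
import numpy as np

rng = np.random.default_rng(2)
L, M = 4, 8            # APs and antennas per AP (illustrative)
sigma = 0.05           # noise standard deviation (assumed)

h = rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))  # uplink channels
s = (1 + 1j) / np.sqrt(2)                                           # unit-energy uplink symbol
noise = sigma * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))
y = h * s + noise                                                   # received signals

# Local MRC at each AP, then simple aggregation at the gateway/CPU
local = np.sum(h.conj() * y, axis=1)          # per-AP combined statistics
s_hat = local.sum() / np.sum(np.abs(h) ** 2)  # normalized fused estimate
```

Because each AP weights its observation by the conjugate channel, the fused statistic coherently accumulates signal energy across all `L * M` antennas, which is what drives the closed-form throughput gains in such distributed setups.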
The regenerative capabilities of next-generation satellite systems offer a novel approach to designing low Earth orbit (LEO) satellite communication systems, enabling full flexibility in bandwidth and spot-beam management, power control, and onboard data processing. These advancements allow the implementation of intelligent spatial multiplexing techniques that address the ever-increasing demand for future broadband data traffic. Existing satellite resource management solutions, however, do not fully exploit these capabilities. To address this issue, a novel framework called the flexible resource management algorithm for LEO satellites (FLARE-LEO) is proposed to jointly design the bandwidth, power, and spot-beam coverage optimized for the geographic distribution of users. It incorporates multi-spot-beam multicasting, spatial multiplexing, caching, and handover (HO). In particular, the spot-beam coverage is optimized by applying the unsupervised K-means algorithm to realistic geographical user demands, followed by a proposed successive convex approximation (SCA)-based iterative algorithm that optimizes the radio resources. Furthermore, we propose two joint transmission architectures for the HO period, which jointly estimate the downlink channel state information (CSI) using deep learning and optimize the transmit power of the LEO satellites involved in the HO process to improve the overall system throughput. Simulations demonstrate the superior performance of the proposed algorithm over existing solutions in terms of delivery-time reduction.
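The K-means-based spot-beam coverage step can be sketched with a plain Lloyd's-algorithm implementation on synthetic user locations. The hot-spot layout, cluster count, and iteration budget are illustrative assumptions; the subsequent SCA-based resource optimization is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy user locations (planar proxies for lat/lon) drawn around three demand hot spots
users = np.concatenate([rng.normal(c, 0.5, size=(100, 2))
                        for c in ([0, 0], [5, 5], [0, 6])])

def kmeans(x, k, iters=50):
    """Plain Lloyd's algorithm: assign points to nearest center, recompute centers."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

beam_centers, assignment = kmeans(users, k=3)  # one spot beam per demand cluster
```

Each resulting center is a candidate spot-beam boresight, and the per-cluster user counts would feed the bandwidth and power allocation stage of the framework.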
Satellite swarms have recently gained attention in the space industry due to their ability to provide extremely narrow beamwidths at a lower cost than single-satellite systems. This paper proposes a concept for a satellite swarm using a distributed subarray configuration based on a 2D normal probability distribution. The swarm comprises multiple small satellites acting as subarrays of a large aperture array bounded by a radius of 20000 wavelengths at a central frequency of 19 GHz. The main advantage of this approach is that the distributed subarrays can provide extremely directive beams and beamforming capabilities that are not possible with a conventional antenna and satellite design. The proposed swarm concept is analyzed, and the simulation results show that the radiation pattern achieves a beamwidth as narrow as 0.0015 degrees with a maximum side-lobe level of 18.8 dB and a grating-lobe level of 14.8 dB. This concept can be used for high-data-rate applications or emergency systems.
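A rough sketch of the swarm geometry: subarray phase centers drawn from a 2D normal distribution, truncated to the stated 20000-wavelength aperture, with a normalized array factor evaluated over direction cosines. The satellite count and the distribution's spread are assumptions, as the abstract does not specify them, and each satellite is idealized as an isotropic point element.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 64                      # number of swarm satellites / subarrays (assumed)
R = 20000.0                 # aperture radius in wavelengths (from the concept)

# Sample phase centers from a 2D normal, rejecting points outside the aperture
pos = np.empty((0, 2))
while len(pos) < K:
    cand = rng.normal(0.0, R / 3.0, size=(K, 2))   # assumed spread: sigma = R/3
    pos = np.vstack([pos, cand[np.linalg.norm(cand, axis=1) <= R]])
pos = pos[:K]

def array_factor(u, v):
    """Normalized array factor toward direction cosines (u, v); positions in wavelengths."""
    return np.abs(np.exp(2j * np.pi * (pos @ np.array([u, v]))).sum()) / K
```

By construction the boresight value is 1, and the sparse random (rather than regular) placement is what trades periodic grating lobes for a diffuse side-lobe floor, the effect quantified in the abstract's simulation results.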