Distributed massive multiple-input multiple-output (mMIMO) for low earth orbit (LEO) satellite networks is a promising technique to provide broadband connectivity. Nevertheless, several challenges persist in implementing distributed mMIMO systems for LEO satellite networks. One is providing a scalable massive-access implementation, as the system complexity increases with network size. Another is the asynchronous arrival of signals at the user terminals due to the different propagation delays among distributed antennas in space, which destroys coherent transmission and consequently degrades system performance. In this paper, we propose a scalable distributed mMIMO system for LEO satellite networks based on dynamic user-centric clustering. To obtain a scalable implementation, new algorithms for initial cooperative access, cluster selection, and cluster handover are provided. In addition, phase shift-aware precoding is implemented to compensate for the propagation delay phase shifts. The performance of the proposed user-centric distributed mMIMO system is compared with two baseline configurations: non-cooperative transmission, where each user connects to only a single satellite, and full-cooperative distributed mMIMO, where all satellites contribute to serving each user. The numerical results show the potential of the proposed distributed mMIMO system to enhance spectral efficiency compared to non-cooperative transmission. Additionally, the proposed system minimizes the serving cluster size for each user, thereby reducing overall system complexity in comparison to full-cooperative distributed mMIMO.
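The phase shift-aware precoding idea can be illustrated with a minimal sketch (not this paper's algorithm): each satellite pre-rotates its transmit signal by the conjugate of its delay-induced carrier phase, so the copies from distributed antennas add coherently at the user. The carrier frequency and satellite-user distances below are illustrative assumptions.

```python
import numpy as np

c = 3e8                                      # speed of light, m/s
fc = 2e9                                     # carrier frequency, Hz (assumed)
dists = np.array([550e3, 551.2e3, 553.7e3])  # satellite-user distances, m (assumed)

delays = dists / c
phase = 2 * np.pi * fc * delays              # delay-induced carrier phase per satellite
h = np.exp(-1j * phase)                      # line-of-sight channel phase terms

w_naive = np.ones(3) / np.sqrt(3)            # precoder that ignores the phase shifts
w_comp = np.conj(h) / np.sqrt(3)             # phase shift-aware (conjugate) precoder

gain_naive = abs(h @ w_naive) ** 2           # incoherent combining, degraded gain
gain_comp = abs(h @ w_comp) ** 2             # fully coherent combining: gain = 3
```

With the conjugate precoder the three delay-rotated channel terms align in phase, recovering the full array gain of 3; the naive precoder loses most of it.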
The pursuit of higher data rates and efficient spectrum utilization in modern communication technologies necessitates novel solutions. In order to provide insights into improving spectral efficiency and reducing latency, this study investigates the maximum channel coding rate (MCCR) of finite block length (FBL) multiple-input multiple-output (MIMO) faster-than-Nyquist (FTN) channels. By optimizing power allocation, we derive the system's MCCR expression. Simulation results are compared with the existing literature to reveal the benefits of FTN in FBL transmission.
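For context, the finite block length regime studied here is commonly characterized by the normal approximation for the maximum coding rate (a standard FBL result due to Polyanskiy, Poor, and Verdú, not the MCCR expression derived in this work):

```latex
R^*(n, \epsilon) \approx C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) + \frac{\log_2 n}{2n},
```

where $n$ is the block length, $\epsilon$ the block error probability, $C$ the channel capacity, $V$ the channel dispersion, and $Q^{-1}$ the inverse Gaussian tail function. As $n \to \infty$ the penalty terms vanish and the rate approaches capacity.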
This study investigates the integration of a high altitude platform station (HAPS), a non-terrestrial network (NTN) node, into the cell-switching paradigm for energy saving, helping achieve sustainability and ubiquitous connectivity targets. In addition, a delay-aware approach is adopted, in which the delay profiles of users are respected so that their latency requirements are met with a best-effort strategy. To this end, a novel, simple, and lightweight Q-learning algorithm is designed to address the cell-switching optimization problem. During the simulation campaigns, different interference scenarios and delay situations between base stations are examined in terms of energy consumption and quality of service (QoS), and the results confirm the efficacy of the proposed Q-learning algorithm.
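The cell-switching idea can be sketched with minimal tabular Q-learning (a toy environment, not the paper's exact design): the state abstracts the network's on/off pattern, the action keeps a base station on or switches it off, and the reward trades energy savings against a delay penalty. All numbers are illustrative assumptions.

```python
import random

random.seed(0)
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration (assumed)
n_states, n_actions = 4, 2           # toy state/action spaces
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    # Toy environment: switching off (action 1) saves energy but incurs a
    # delay penalty in one "high-load" state; all values are placeholders.
    energy_saving = 1.0 if action == 1 else 0.0
    delay_penalty = 0.5 if (action == 1 and state == 3) else 0.0
    return (state + 1) % n_states, energy_saving - delay_penalty

state = 0
for _ in range(2000):
    if random.random() < eps:                                   # explore
        action = random.randrange(n_actions)
    else:                                                       # exploit
        action = max(range(n_actions), key=lambda a: Q[state][a])
    nxt, r = step(state, action)
    # Standard Q-learning temporal-difference update
    Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt
```

After training, the learned Q-values favor switching off in states where the energy saving outweighs the delay penalty, which is the essence of delay-aware cell switching.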
Selection of hyperparameters in deep neural networks is a challenging problem due to the wide search space and the emergence of various layers with their own hyperparameters. Neural architecture selection for convolutional neural networks (CNNs) in spectrum sensing has, however, received little attention. Here, we develop a Q-learning-based reinforcement learning method to systematically search and evaluate various architectures on generated datasets covering different signals and channels in the spectrum sensing problem. We show by extensive simulations that CNN-based detectors proposed by our method outperform several detectors in the literature. For the most complex dataset, the proposed approach provides a 9% enhancement in accuracy at the cost of higher computational complexity. Furthermore, a novel multi-armed bandit method for selecting the sensing time is proposed to achieve higher throughput and accuracy while minimizing the consumed energy. The method dynamically adjusts the sensing time under time-varying channel conditions without prior information. We demonstrate through a simulated scenario that the proposed method improves the achieved reward by about 20% compared to conventional policies. Consequently, this study effectively manages the selection of important hyperparameters for CNN-based detectors, offering superior performance in cognitive radio networks.
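The sensing-time selection can be sketched as an epsilon-greedy multi-armed bandit (a hedged sketch, not the paper's exact model): each arm is a candidate sensing duration, and the reward trades detection accuracy against the throughput and energy cost of sensing longer. The candidate durations and reward means below are assumptions.

```python
import random

random.seed(1)
sensing_times = [1, 2, 4]                 # ms, candidate durations (assumed)
true_mean = {1: 0.4, 2: 0.7, 4: 0.55}     # illustrative per-arm reward means
eps = 0.1                                 # exploration probability

counts = {t: 0 for t in sensing_times}
values = {t: 0.0 for t in sensing_times}  # running reward estimates

def pull(t):
    # Noisy reward observation for sensing time t (toy model)
    return true_mean[t] + random.gauss(0, 0.05)

for _ in range(3000):
    if random.random() < eps:
        t = random.choice(sensing_times)                       # explore
    else:
        t = max(sensing_times, key=lambda x: values[x])        # exploit
    r = pull(t)
    counts[t] += 1
    values[t] += (r - values[t]) / counts[t]  # incremental mean update

best = max(sensing_times, key=lambda x: values[x])  # converges to the 2 ms arm
```

Without prior channel information, the bandit learns online which sensing duration yields the best accuracy/throughput tradeoff and concentrates its pulls there.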
Waveform generation is essential for studying signal propagation and channel characteristics, particularly for systems that have been conceptualized but are not yet operational. We introduce a comprehensive guide on creating synthetic signals using channel and delay coefficients derived from the Quasi-Deterministic Radio Channel Generator (QuaDRiGa), which is recognized as a 3GPP-3D and 3GPP 38.901 reference implementation. The effectiveness of the proposed synthetic waveform generation method is validated through accurate estimation of code delay and Doppler shift, using both the parallel code phase search technique and the conventional tracking method applied to satellite signals. Since the method of integrating channel and delay coefficients to create synthetic waveforms is the same for satellite, HAPS, and gNB PRS, validating it on synthetic satellite signals could potentially extend to HAPS and gNB PRS as well. This study could significantly contribute to the field of heterogeneous navigation systems.
Simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) is a cutting-edge concept for sixth-generation (6G) wireless networks. In this letter, we propose a novel system that incorporates STAR-RIS with simultaneous wireless information and power transfer (SWIPT) using rate splitting multiple access (RSMA). The proposed system facilitates communication from a multi-antenna base station (BS) to single-antenna users in a downlink transmission. The BS concurrently sends energy and information signals to multiple energy harvesting receivers (EHRs) and information data receivers (IDRs) with the support of a deployed STAR-RIS. Furthermore, a multi-objective optimization is introduced to strike a balance between the users' sum rate and the total harvested energy. To achieve this, an optimization problem is formulated to optimize the energy/information beamforming vectors at the BS, the phase shifts at the STAR-RIS, and the common message rate. Subsequently, we employ a meta deep deterministic policy gradient (Meta-DDPG) approach to solve this complex problem. Simulation results validate that the proposed algorithm significantly enhances both the data rate and the harvested energy in comparison to conventional DDPG.
The deployment of federated learning (FL) within vertical heterogeneous networks, such as those enabled by a high-altitude platform station (HAPS), offers the opportunity to engage a wide array of clients, each endowed with distinct communication and computational capabilities. This diversity not only enhances the training accuracy of FL models but also hastens their convergence. Yet, applying FL in these expansive networks presents notable challenges, particularly the significantly non-IID nature of client data distributions. Such data heterogeneity often results in slower convergence rates and reduced model training performance. Our study introduces a client selection strategy tailored to address this issue, leveraging user network traffic behaviour. This strategy involves the prediction and classification of clients based on their network usage patterns while prioritizing user privacy. By strategically selecting clients whose data exhibit similar patterns for participation in FL training, our approach fosters a more uniform and representative data distribution across the network. Our simulations demonstrate that this targeted client selection methodology significantly reduces the training loss of FL models in HAPS networks, thereby effectively tackling a crucial challenge in implementing large-scale FL systems.
Federated learning (FL) is a decentralized machine learning (ML) technique that allows a number of participants to train an ML model collaboratively without having to share their private local datasets. When participants are unmanned aerial vehicles (UAVs), UAV-enabled FL experiences heterogeneity due to the highly skewed, non-independent and identically distributed (non-IID) collected data. In addition, UAVs may exhibit unintentional misbehavior, failing to send updates to the FL server due, for instance, to disconnection from the FL system caused by high mobility, unavailability, or battery depletion. Such challenges can significantly affect the convergence of the FL model. A recent way to tackle them is client selection, based on customized criteria that consider UAV computing power and energy consumption. However, most existing client selection schemes neglect participant reliability. Indeed, FL can be targeted by poisoning attacks, in which malicious UAVs upload poisoned local models to the FL server, either providing targeted false predictions for specifically chosen inputs or compromising the global model's accuracy by tampering with the local model. Hence, we propose in this paper a novel client selection scheme that enhances convergence by prioritizing fast UAVs with high reliability scores while eliminating malicious UAVs from training. Through experiments, we assess the effectiveness of our scheme in resisting different attack scenarios, in terms of convergence and achieved model accuracy. Finally, we demonstrate the performance superiority of the proposed approach over baseline methods.
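The selection step can be sketched as follows (an assumed scoring rule, not the paper's scheme): rank UAV clients by a weighted combination of speed and reliability, drop those flagged as malicious, and pick the top-k for the round. All client records and weights are illustrative.

```python
# Hypothetical UAV client records: speed and reliability in [0, 1],
# plus a flag from a (not shown) poisoning-detection step.
clients = [
    {"id": "uav0", "speed": 0.9, "reliability": 0.95, "malicious": False},
    {"id": "uav1", "speed": 0.8, "reliability": 0.20, "malicious": True},
    {"id": "uav2", "speed": 0.4, "reliability": 0.90, "malicious": False},
    {"id": "uav3", "speed": 0.7, "reliability": 0.85, "malicious": False},
]

def select(clients, k, w_speed=0.5, w_rel=0.5):
    # Eliminate malicious UAVs, then keep the k best-scoring trusted ones.
    trusted = [c for c in clients if not c["malicious"]]
    score = lambda c: w_speed * c["speed"] + w_rel * c["reliability"]
    return sorted(trusted, key=score, reverse=True)[:k]

chosen = [c["id"] for c in select(clients, k=2)]  # fast, reliable, non-malicious
```

Here uav1 is excluded outright despite its speed, and the two highest-scoring trusted UAVs are selected, mirroring the "fast and reliable first" priority described above.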
In free-space optical satellite networks (FSOSNs), where satellites are connected via laser inter-satellite links (LISLs), latency is a critical factor, especially for long-distance inter-continental connections. Since satellites depend on solar panels for power, power consumption is also a vital factor. We investigate the minimization of total network latency (i.e., the sum of the network latencies of all inter-continental connections in a time slot) in a realistic model of an FSOSN, the latest version of the Starlink Phase 1 Version 3 constellation. We develop mathematical formulations of the total network latency over different LISL ranges and different satellite transmission power constraints for multiple simultaneous inter-continental connections. We use practical system models for calculating network latency and satellite optical link transmission power, and we formulate the problem as a binary integer linear program. The results reveal that, for satellite transmission power limits set at 0.5 W, 0.3 W, and 0.1 W, the average total network latency for all five inter-continental connections studied in this work levels off at 339 ms, 361 ms, and 542 ms, respectively. Furthermore, the corresponding LISL ranges required to achieve these average total network latency values are 4500 km, 3000 km, and 1731 km, respectively. Different limitations on satellite transmission power exhibit varying effects on average total network latency (over 100 time slots), and they also induce differing changes in the corresponding LISL ranges. In the absence of satellite transmission power constraints, as the LISL range extends from the minimum feasible range of 1575 km to the maximum feasible range of 5016 km, the average total network latency decreases from 589 ms to 311 ms.
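At the single-connection level, the latency objective reduces to shortest-path routing over the LISL graph with propagation-delay edge weights; the sketch below uses Dijkstra's algorithm as a simplified stand-in for the paper's joint binary integer linear program. Satellite positions and the LISL range are made-up toy values.

```python
import heapq

C = 299_792_458.0  # speed of light in vacuum, m/s

# Toy 2-D satellite positions in metres (assumed, not real ephemerides)
sats = {"s0": (0.0, 0.0), "s1": (2000e3, 500e3),
        "s2": (4000e3, 0.0), "s3": (2000e3, -1500e3)}
lisl_range = 2500e3  # m, maximum LISL reach (assumed)

def dist(a, b):
    (x1, y1), (x2, y2) = sats[a], sats[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

# Build the LISL graph: an edge exists when two satellites are in range;
# its weight is the propagation delay d / c in seconds.
edges = {u: [] for u in sats}
for u in sats:
    for v in sats:
        if u != v and dist(u, v) <= lisl_range:
            edges[u].append((v, dist(u, v) / C))

def min_latency(src, dst):
    # Dijkstra's algorithm over propagation delays
    pq, seen = [(0.0, src)], set()
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if u in seen:
            continue
        seen.add(u)
        for v, w in edges[u]:
            if v not in seen:
                heapq.heappush(pq, (t + w, v))
    return float("inf")

latency_ms = min_latency("s0", "s2") * 1e3  # two-hop path via s1, ~13.8 ms
```

The full problem in the paper additionally couples many connections through shared satellites and per-satellite power limits, which is why it is posed as a binary integer linear program rather than independent shortest paths.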
In free-space optical satellite networks (FSOSNs), satellites can have different laser inter-satellite link (LISL) ranges for connectivity. A greater LISL range can reduce the network latency of a path but can also increase the transmission power of the satellites on that path. Consequently, this tradeoff between satellite transmission power and network latency should be investigated, and in this work we examine it in FSOSNs drawing on the Starlink Phase 1 Version 3 and Kuiper Shell 2 constellations for different LISL ranges and different inter-continental connections. We use appropriate system models for calculating the average satellite transmission power and network latency. The results show that the mean network latency decreases and the mean average satellite transmission power increases with an increase in LISL range. For the Toronto--Sydney inter-continental connection in an FSOSN with Starlink's Phase 1 Version 3 constellation, the two curves intersect at an LISL range of approximately 2,900 km, where the mean network latency and mean average satellite transmission power are approximately 135 ms and 380 mW, respectively. For an FSOSN with the Kuiper Shell 2 constellation on this inter-continental connection, the intersection occurs at an LISL range of around 3,800 km, where the two parameters are approximately 120 ms and 700 mW, respectively. For the Toronto--Istanbul and Toronto--London inter-continental connections, the LISL ranges at the intersection differ, varying from 2,600 km to 3,400 km. Furthermore, we analyze the outage probability performance of the optical uplink/downlink due to atmospheric attenuation and turbulence.
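The direction of the tradeoff can be illustrated with a deliberately simple model (assumed forms, not the paper's system models): a fixed end-to-end path is split into hops of length L, so a longer LISL range means fewer hops and less per-hop forwarding delay, while the per-link transmit power grows because free-space optical received power falls off roughly as 1/L². All constants are illustrative.

```python
import math

C = 299_792_458.0      # speed of light in vacuum, m/s
total_dist = 8_000e3   # m, end-to-end path length (assumed)
proc_delay = 1e-3      # s of forwarding delay per hop (assumed)
p_ref = 0.1            # W required at a 1,000 km reference link (assumed)

def latency_ms(L):
    # Propagation delay plus per-hop forwarding delay for hop length L
    hops = math.ceil(total_dist / L)
    return (total_dist / C + hops * proc_delay) * 1e3

def tx_power_w(L):
    # Inverse-square link budget relative to the reference link
    return p_ref * (L / 1_000e3) ** 2

short_hops = latency_ms(1_500e3), tx_power_w(1_500e3)  # more hops, lower power
long_hops = latency_ms(4_500e3), tx_power_w(4_500e3)   # fewer hops, higher power
```

Sweeping L in such a model reproduces the qualitative behavior reported above: latency monotonically falls and power monotonically rises with LISL range, so the two curves cross at some intermediate range.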