Integrated satellite-terrestrial networks (ISTNs) can significantly expand network coverage while diminishing reliance on terrestrial infrastructure. Despite the enticing potential of ISTNs, no comprehensive mathematical framework exists for analyzing the performance of these emerging networks. In this paper, we introduce a tractable approach to analyzing the downlink coverage performance of multi-tier ISTNs in which each network tier operates on an orthogonal frequency band. The proposed approach models the spatial distribution of cellular and satellite base stations using homogeneous Poisson point processes on concentric spheres of varying radii. Central to our analysis is a displacement principle that transforms base station locations on different spheres into projected rings while preserving the distance distribution to the typical user. By incorporating the effects of Shadowed-Rician fading on satellite channels and the use of orthogonal frequency bands, we derive analytical expressions for coverage in the integrated networks in full generality. Our primary finding is that network performance is maximized by selecting the ratio of users associated with each tier according to that tier's density and channel parameters. Through simulations, we validate the accuracy of the derived expressions.
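The sphere-based model lends itself to a quick Monte Carlo check. The sketch below is purely illustrative — the Earth radius, altitude, and density values are assumed, and the horizon-based visibility test is a generic geometric criterion, not the paper's analytical derivation. It samples a homogeneous PPP on a satellite sphere and counts the satellites visible to a typical user:

```python
import numpy as np

rng = np.random.default_rng(0)

R_EARTH = 6371e3   # Earth radius [m] (assumed)
ALT = 550e3        # satellite altitude [m] (illustrative)
DENSITY = 1e-12    # satellites per m^2 of the sphere (illustrative)

def sample_ppp_on_sphere(radius, density, rng):
    """Draw a homogeneous Poisson point process on a sphere of given radius."""
    area = 4.0 * np.pi * radius**2
    n = rng.poisson(density * area)
    # Uniform points on the sphere via normalized Gaussian vectors.
    xyz = rng.normal(size=(n, 3))
    xyz *= radius / np.linalg.norm(xyz, axis=1, keepdims=True)
    return xyz

# Typical user placed at the north pole of the Earth sphere.
user = np.array([0.0, 0.0, R_EARTH])
sats = sample_ppp_on_sphere(R_EARTH + ALT, DENSITY, rng)
dists = np.linalg.norm(sats - user, axis=1)
# A satellite is visible if it lies above the user's local horizon,
# i.e., closer than the tangent distance to the satellite sphere.
visible = dists <= np.sqrt((R_EARTH + ALT)**2 - R_EARTH**2)
print(f"{len(sats)} satellites, {visible.sum()} visible")
```

Distances from the typical user to the visible points give the empirical distance distribution that the displacement principle preserves.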
Integrated sensing and communication (ISAC) is widely recognized as a fundamental enabler for future wireless communications. In this paper, we present a joint communication and radar beamforming framework for maximizing the sum spectral efficiency (SE) while guaranteeing the desired radar performance with imperfect channel state information (CSI) in multi-user, multi-target ISAC systems. To this end, we adopt either the radar transmit beam mean square error (MSE) or the receive signal-to-clutter-plus-noise ratio (SCNR) as the radar performance constraint of a sum SE maximization problem. To resolve inherent challenges such as non-convexity and imperfect CSI, we reformulate the problems and identify first-order optimality conditions for the joint radar and communication beamformer. Turning the condition into a nonlinear eigenvalue problem with eigenvector dependency (NEPv), we develop an alternating method that finds the joint beamformer through power iteration and the Lagrangian multiplier through binary search. The proposed framework encompasses both radar metrics and is robust to channel estimation error at low complexity. Simulations validate the proposed methods. In particular, we observe that the MSE and SCNR constraints exhibit complementary performance depending on the operating environment, which underscores the importance of the proposed comprehensive and robust optimization framework.
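The NEPv solution step can be sketched generically. The routine below is a bare fixed-point power iteration for A(x) x = λ x with an x-dependent matrix — a textbook-style sketch, not the paper's exact beamformer update, and the binary search for the Lagrangian multiplier is omitted:

```python
import numpy as np

def nepv_power_iteration(A_of_x, x0, iters=500, tol=1e-10):
    """Fixed-point power iteration for A(x) x = lambda x, where the matrix
    depends on the eigenvector itself (generic sketch; the Lagrangian
    multiplier search of the full alternating method is omitted)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = A_of_x(x) @ x          # apply the current, x-dependent matrix
        y /= np.linalg.norm(y)     # renormalize as in classic power iteration
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    lam = np.real(np.conj(x) @ A_of_x(x) @ x)
    return x, lam

# Sanity check: with a constant matrix this reduces to ordinary power
# iteration and returns the principal eigenpair.
x_star, lam = nepv_power_iteration(lambda x: np.diag([3.0, 2.0, 1.0]), np.ones(3))
```

In the full method, each power-iteration pass is interleaved with a binary search that tunes the multiplier until the radar constraint is met with equality.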
In this paper, we investigate the coverage performance of downlink satellite networks employing dynamic coordinated beamforming. Our approach involves modeling the spatial arrangement of satellites and users using Poisson point processes situated on concentric spheres. We derive analytical expressions for the coverage probability, which take into account the in-cluster geometry of the coordinated satellite set. These expressions are formulated in terms of various parameters, including the number of antennas per satellite, satellite density, fading characteristics, and path-loss exponent. To offer a more intuitive understanding, we also develop an approximation for the coverage probability. Furthermore, by considering the distribution of normalized distances, we derive the spatially averaged coverage probability, thereby validating the advantages of coordinated beamforming from a spatial average perspective. Our primary finding is that dynamic coordinated beamforming significantly improves coverage compared to the absence of satellite coordination, in direct proportion to the number of antennas on each satellite. Moreover, we observe that the optimal cluster size, which maximizes the ergodic spectral efficiency, increases with higher satellite density, provided that the number of antennas on the satellites is sufficiently large. Our findings are corroborated by simulation results, confirming the accuracy of the derived expressions.
Full-duplex communication systems have the potential to achieve significantly higher data rates and lower latency than their half-duplex counterparts. This advantage stems from their ability to transmit and receive data simultaneously. However, to enable successful full-duplex operation, the primary challenge lies in accurately eliminating strong self-interference (SI). Overcoming this challenge involves addressing various issues, including the nonlinearity of power amplifiers, the time-varying nature of the SI channel, and the non-stationary transmit data distribution. In this article, we present a review of recent advancements in digital self-interference cancellation (SIC) algorithms. Our focus is on comparing the effectiveness of adaptive model-based SIC methods with their model-free counterparts that leverage data-driven machine learning techniques. Through our comparative study under practical scenarios, we demonstrate that the model-based SIC approach offers a more robust solution to the time-varying SI channel and non-stationary transmission, achieving optimal SIC performance in terms of convergence rate while maintaining low computational complexity. To validate our findings, we conduct experiments using a software-defined radio testbed that conforms to the IEEE 802.11a standard. The experimental results demonstrate the robustness of the model-based SIC methods, providing practical evidence of their effectiveness.
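As a toy illustration of the adaptive model-based approach, the sketch below runs a least-mean-squares (LMS) canceller over a purely linear SI channel. The tap count, step size, and noise level are invented for illustration, and the power-amplifier nonlinearity discussed in the article is deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SI channel with L taps (real systems also face PA nonlinearity).
L, N, mu = 8, 5000, 0.01
h_si = rng.normal(size=L) * 0.5
x = rng.choice([-1.0, 1.0], size=N)                        # known transmit symbols
y = np.convolve(x, h_si)[:N] + 0.01 * rng.normal(size=N)   # received SI + noise

w = np.zeros(L)                         # adaptive canceller taps
for n in range(L, N):
    x_vec = x[n - L + 1:n + 1][::-1]    # most recent L transmit samples
    e = y[n] - w @ x_vec                # residual after cancellation
    w += mu * e * x_vec                 # LMS update toward the SI channel

print("tap error:", np.linalg.norm(w - h_si))
```

Because the update tracks the SI channel online from the known transmit data, the same loop keeps cancelling even when `h_si` drifts over time — the robustness property the comparison highlights.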
With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies that allow one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks in one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows each point to be treated as a source of either desired signal or interference while ensuring visibility to the typical user. Thanks to this model, we derive the coverage probability of STINs as a function of the major system parameters, chiefly the path-loss exponent, the height distributions and densities of satellites and terrestrial base stations, transmit powers, and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
In the upcoming 6G era, multiple access (MA) will play an essential role in achieving the high throughput performance required by a wide range of wireless applications. Since MA and interference management are closely related issues, the conventional MA techniques are limited in that they cannot provide near-optimal performance across all interference regimes. Recently, rate-splitting multiple access (RSMA) has been gaining much attention. RSMA splits an individual message into two parts: a common part, decodable by every user, and a private part, decodable only by the intended user. Each user first decodes the common message and then decodes its private message by applying successive interference cancellation (SIC). By doing so, RSMA not only embraces the existing MA techniques as special cases but also provides significant performance gains by efficiently mitigating inter-user interference in a broad range of interference regimes. In this article, we first present the theoretical foundation of RSMA. Subsequently, we put forth four key benefits of RSMA: spectral efficiency, robustness, scalability, and flexibility. Building on these, we describe how RSMA can enable ten promising scenarios and applications, along with future research directions to pave the way for 6G.
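The message-splitting principle can be made concrete with a two-user, single-antenna toy example. The channel gains and power split below are illustrative, and the rate expressions are the standard Gaussian-signaling forms for superposed streams, not tied to any specific scenario in the article:

```python
import numpy as np

def rsma_two_user_rates(h1, h2, p_c, p_1, p_2, noise=1.0):
    """Achievable rates for a 2-user SISO RSMA toy example.
    The common stream must be decodable by both users; each user then
    removes it via SIC before decoding its own private stream."""
    g1, g2 = abs(h1)**2, abs(h2)**2
    # Common-stream SINR at each user: private streams act as interference.
    sinr_c1 = g1 * p_c / (g1 * (p_1 + p_2) + noise)
    sinr_c2 = g2 * p_c / (g2 * (p_1 + p_2) + noise)
    # The common rate is set by the weaker user so both can decode it.
    r_common = np.log2(1 + min(sinr_c1, sinr_c2))
    # After SIC of the common stream, each user still sees the other's
    # private stream as residual interference.
    r1 = np.log2(1 + g1 * p_1 / (g1 * p_2 + noise))
    r2 = np.log2(1 + g2 * p_2 / (g2 * p_1 + noise))
    return r_common, r1, r2

rc, r1, r2 = rsma_two_user_rates(h1=1.0, h2=1.0, p_c=2.0, p_1=1.0, p_2=1.0)
```

Setting `p_c = 0` recovers plain superposition of private streams, which is how RSMA subsumes existing MA schemes as special cases.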
Channel state information at the transmitter (CSIT) is indispensable for the spectral efficiency gains offered by massive multiple-input multiple-output (MIMO) systems. In a frequency-division duplexing (FDD) massive MIMO system, CSIT is typically acquired through downlink channel estimation and user feedback; however, as the number of antennas increases, the per-user overhead for CSI training and feedback grows, leading to a decrease in spectral efficiency. In this paper, we show that, using uplink pilots in FDD, the downlink sum spectral efficiency gain of perfect downlink CSIT is achievable when the number of antennas at the base station is infinite, under some mild channel conditions. The key to our result is a mean-squared-error-optimal downlink channel reconstruction method using uplink pilots, which exploits the geometric reciprocity of the uplink and downlink channels. We also present a robust downlink precoding method that harnesses the reconstructed channel together with its error covariance matrix. Our system-level simulations show that the proposed precoding method attains sum spectral efficiency comparable to zero-forcing precoding with perfect downlink CSIT, without CSI training and feedback.
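The geometric-reciprocity idea can be illustrated with a toy uniform-linear-array sketch: the multipath angles and gains are frequency-invariant across the duplex gap, so the downlink channel can be rebuilt from uplink-side geometry by swapping only the carrier wavelength. Here the angles and gains are assumed perfectly known; in the paper they are extracted from uplink pilots in an MSE-optimal fashion, which this sketch does not attempt:

```python
import numpy as np

def ula_response(theta, n_ant, spacing_m, wavelength):
    """Uniform linear array response; spacing is fixed in meters, so the
    phase progression depends on the carrier wavelength."""
    k = 2 * np.pi / wavelength
    n = np.arange(n_ant)
    return np.exp(1j * k * spacing_m * n * np.sin(theta)) / np.sqrt(n_ant)

rng = np.random.default_rng(0)
n_ant, spacing = 64, 0.05        # array size and element spacing [m] (assumed)
lam_ul, lam_dl = 0.16, 0.15      # uplink/downlink wavelengths (illustrative)

# Paths share geometry (angles, complex gains) across the FDD duplex gap.
angles = rng.uniform(-np.pi / 3, np.pi / 3, size=4)
gains = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)

h_ul = sum(g * ula_response(a, n_ant, spacing, lam_ul) for g, a in zip(gains, angles))
# Reconstruct the downlink channel from the same geometry at the downlink wavelength.
h_dl = sum(g * ula_response(a, n_ant, spacing, lam_dl) for g, a in zip(gains, angles))
```

The two vectors differ because the phase progression scales with the carrier wavelength, which is exactly why naive uplink-downlink channel reuse fails in FDD while geometry reuse does not.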
Simultaneous localization and mapping (SLAM) is a method that constructs a map of an unknown environment while simultaneously localizing a moving agent on that map. The extended Kalman filter (EKF) has been widely adopted as a low-complexity solution for online SLAM, relying on motion and measurement models of the moving agent. In practice, however, acquiring precise information about these models is very challenging, and the resulting model mismatch causes severe performance loss in SLAM. In this paper, inspired by the recently proposed KalmanNet, we present a robust EKF algorithm that harnesses the power of deep learning for online SLAM, referred to as Split-KalmanNet. The key idea of Split-KalmanNet is to compute the Kalman gain using the Jacobian matrix of the measurement function and two recurrent neural networks (RNNs). The two RNNs independently learn from data the covariance matrices of the prior state estimate and the innovation. The proposed split structure in the computation of the Kalman gain makes it possible to compensate for state and measurement model mismatch effects independently. Numerical simulation results verify that Split-KalmanNet outperforms the traditional EKF and the state-of-the-art KalmanNet algorithm in various model mismatch scenarios.
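For reference, the classic EKF measurement update that Split-KalmanNet builds on is sketched below. In Split-KalmanNet the covariances entering the gain are produced by the two RNNs rather than the analytic recursion; this sketch shows only the standard model-based step:

```python
import numpy as np

def ekf_update(x_prior, P_prior, z, h_fn, H_jac, R):
    """One EKF measurement update. Split-KalmanNet keeps this structure but
    learns the covariance terms feeding the gain from data, while still
    using the analytic measurement Jacobian H."""
    H = H_jac(x_prior)
    innov = z - h_fn(x_prior)                # innovation
    S = H @ P_prior @ H.T + R                # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_post = x_prior + K @ innov
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Linear sanity check: identity measurement, equal covariances -> the
# posterior lands halfway between the prior and the measurement.
x_post, P_post = ekf_update(
    np.zeros(2), np.eye(2), np.array([2.0, 2.0]),
    lambda x: x, lambda x: np.eye(2), np.eye(2))
```

The split structure replaces `P_prior` and the model term inside `S` with separate learned quantities, so mismatch in the motion model and in the measurement model can be corrected independently.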
To realize ultra-reliable low latency communications with high spectral efficiency and security, we investigate a joint optimization problem for downlink communications with multiple users and eavesdroppers in the finite blocklength (FBL) regime. We formulate a multi-objective optimization problem to maximize the sum secrecy rate by developing a secure precoder and to minimize the maximum error probability and information leakage rate. The main challenges arise from the complicated multi-objective problem, intractable back-off factors from the FBL assumption, non-convexity and non-smoothness of the secrecy rate, and the intertwined optimization variables. To address these challenges, we adopt an alternating optimization approach by decomposing the problem into two phases: secure precoding design, and maximum error probability and information leakage rate minimization. In the first phase, we obtain a lower bound on the secrecy rate and derive a first-order Karush-Kuhn-Tucker (KKT) condition to identify locally optimal solutions with respect to the precoders. Interpreting the condition as a generalized eigenvalue problem, we solve it using a power iteration-based method. In the second phase, we adopt a weighted-sum approach and derive KKT conditions in terms of the error probabilities and leakage rates for given precoders. Simulations validate the proposed algorithm.
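The first-phase solver can be sketched generically: a power iteration on B⁻¹A recovers the principal generalized eigenpair of A x = λ B x. This is a textbook sketch assuming an invertible, well-conditioned B, not the paper's exact secure-precoding update:

```python
import numpy as np

def gep_principal_pair(A, B, iters=500, tol=1e-10):
    """Principal generalized eigenvector of A x = lambda B x via power
    iteration on B^{-1} A (generic sketch; assumes B is invertible)."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    B_inv = np.linalg.inv(B)
    for _ in range(iters):
        y = B_inv @ (A @ x)
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    lam = (x @ A @ x) / (x @ B @ x)   # generalized Rayleigh quotient
    return x, lam

x_star, lam = gep_principal_pair(np.diag([4.0, 1.0]), np.eye(2))
```

In the secrecy-rate setting, A and B would be built from the signal and leakage-plus-noise covariances of the precoder condition, so the generalized Rayleigh quotient corresponds to the lower-bounded objective being maximized.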
This paper investigates the sum spectral efficiency maximization problem in downlink multiuser multiple-input multiple-output (MIMO) systems with low-resolution quantizers at an access point (AP) and users. In particular, we consider rate-splitting multiple access (RSMA) to enhance spectral efficiency by offering opportunities to boost the achievable degrees of freedom. Optimizing RSMA precoders, however, is highly challenging due to the minimum rate constraint when determining the rate of the common stream. The quantization errors coupled with the precoders make the problem still more complicated and difficult to solve. In this paper, we develop a novel RSMA precoding algorithm that incorporates quantization errors for maximizing the sum spectral efficiency. To this end, we first obtain a smooth approximation of the spectral efficiency. Subsequently, we derive the first-order optimality condition in the form of a nonlinear eigenvalue problem (NEP). We propose a computationally efficient algorithm that finds the principal eigenvector of the NEP as a sub-optimal solution. Simulation results validate the superior spectral efficiency of the proposed method. The key benefit of using RSMA over spatial division multiple access (SDMA) comes from the ability of the common stream to balance between the channel gain and quantization error in multiuser MIMO systems with different quantization resolutions.
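A common way to capture low-resolution quantization, used here purely as an illustrative sketch, is the additive quantization noise model (AQNM). The distortion factors below are the standard Lloyd-Max values for a Gaussian input, and the SINR expression is a generic per-stream form rather than the paper's exact signal model:

```python
import numpy as np

# Distortion factors rho_b for b-bit scalar quantization of a Gaussian
# input (standard Lloyd-Max values; illustrative use here).
RHO = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}

def quantized_sinr(g_signal, g_interf, noise, bits):
    """Per-stream SINR under the AQNM: the quantizer output is modeled as
    alpha*y + q, where the quantization noise power is alpha*(1 - alpha)
    times the total input power (generic sketch)."""
    alpha = 1.0 - RHO[bits]
    rx_power = g_signal + g_interf + noise
    q_noise = alpha * (1.0 - alpha) * rx_power
    return (alpha**2 * g_signal) / (alpha**2 * (g_interf + noise) + q_noise)

s1 = quantized_sinr(10.0, 1.0, 1.0, bits=1)
s5 = quantized_sinr(10.0, 1.0, 1.0, bits=5)
```

Because the quantization noise scales with the total received power, a common stream that concentrates power can trade channel gain against quantization error — the balancing effect the paper identifies as RSMA's advantage over SDMA under mixed resolutions.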