This paper investigates full-duplex (FD) multi-user multiple-input multiple-output (MU-MIMO) system design with coarse quantization. We first analyze the impact of self-interference (SI) on quantization in FD single-input single-output systems. The analysis elucidates that the minimum required number of analog-to-digital converter (ADC) bits is logarithmically proportional to the ratio of the total received power to the received power of the desired signals. Motivated by this, we design an FD MIMO beamforming method that effectively manages the SI. Dividing a spectral efficiency maximization beamforming problem into two sub-problems for alternating optimization, we address the first by optimizing the precoder: we derive a generalized eigenvalue problem from the first-order optimality condition, whose principal eigenvector is the optimal stationary solution, and adopt a power iteration method to identify this eigenvector. Subsequently, a quantization-aware minimum mean square error combiner is computed for the derived precoder. Through numerical studies, we observe that the proposed beamformer requires fewer ADC bits than FD benchmarks to achieve higher spectral efficiency than half-duplex (HD) systems. The overall analysis shows that, unlike quantized HD systems, the quantized FD system requires more than 6 ADC bits to fully realize its potential.
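As a rough illustration of the power-iteration step mentioned above, the following sketch finds the principal generalized eigenvector of a matrix pair (A, B); the function name, dimensions, and tolerances are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gevp_power_iteration(A, B, num_iter=5000, tol=1e-10):
    """Principal eigenvector of B^{-1} A via power iteration, i.e. the
    dominant solution of the generalized problem A f = lam * B f.
    Assumes the dominant eigenvalue is real and positive (e.g. A, B
    Hermitian positive definite, as in a Rayleigh-quotient-type
    stationarity condition)."""
    n = A.shape[0]
    f = np.ones(n) / np.sqrt(n)            # arbitrary unit-norm start
    B_inv = np.linalg.inv(B)
    for _ in range(num_iter):
        f_next = B_inv @ (A @ f)
        f_next /= np.linalg.norm(f_next)   # renormalize each iteration
        if np.linalg.norm(f_next - f) < tol:
            f = f_next
            break
        f = f_next
    # generalized Rayleigh quotient at the fixed point
    lam = (f.conj() @ A @ f).real / (f.conj() @ B @ f).real
    return f, lam
```

For well-separated dominant eigenvalues the iteration converges geometrically, which is what makes it attractive inside an alternating optimization loop: each outer iteration only needs a few matrix-vector products rather than a full eigendecomposition.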
Nonlinear self-interference cancellation (SIC) is essential for full-duplex communication systems, which can offer twice the spectral efficiency of traditional half-duplex systems. The challenge of nonlinear SIC is similar to the classic problem of system identification in adaptive filter theory, whose crux lies in identifying the optimal nonlinear basis functions for a nonlinear system. This becomes especially difficult when the system input has a non-stationary distribution. In this paper, we propose a novel algorithm for nonlinear digital SIC that adaptively constructs orthonormal polynomial basis functions according to the non-stationary moments of the transmit signal. By combining these basis functions with the least mean squares (LMS) algorithm, we introduce a new SIC technique, called the adaptive orthonormal polynomial LMS (AOP-LMS) algorithm. To reduce computational complexity for practical systems, we augment our approach with a precomputed look-up table, which maps a given modulation and coding scheme to its corresponding basis functions. Numerical simulations indicate that our proposed method surpasses existing state-of-the-art SIC algorithms in terms of convergence speed and mean squared error when the transmit signal is non-stationary, as occurs with adaptive modulation and coding. Experimental evaluation with a wireless testbed confirms that our proposed approach outperforms existing digital SIC algorithms.
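A simplified sketch of the two ingredients named above: a Gram-Schmidt construction of polynomials orthonormal under the empirical distribution of the transmit signal, followed by an adaptive-filter update on those features. The function names, the moment-based inner product, and the normalized step (used here for numerical robustness in place of plain LMS) are illustrative assumptions.

```python
import numpy as np

def orthonormal_poly_coeffs(x, order):
    """Gram-Schmidt on the monomials 1, x, ..., x^order under the inner
    product <p, q> = E[p(x) q(x)], with the expectation replaced by
    sample moments of the (possibly non-stationary) transmit signal x."""
    m = np.array([np.mean(x ** k) for k in range(2 * order + 1)])
    def inner(a, b):
        return sum(a[i] * b[j] * m[i + j]
                   for i in range(len(a)) for j in range(len(b)))
    coeffs = []
    for k in range(order + 1):
        c = np.zeros(order + 1)
        c[k] = 1.0                          # start from the monomial x^k
        for b in coeffs:                    # subtract projections
            c = c - inner(c, b) * b
        coeffs.append(c / np.sqrt(inner(c, c)))
    return np.array(coeffs)                 # rows = basis polynomials

def aop_lms_step(w, x_n, d_n, coeffs, mu=0.5, eps=1e-8):
    """One normalized-LMS update on the orthonormal-polynomial features
    of transmit sample x_n; d_n is the self-interference observation."""
    psi = coeffs @ (x_n ** np.arange(coeffs.shape[1]))
    e = d_n - w @ psi                       # residual cancellation error
    return w + mu * e * psi / (eps + psi @ psi), e
```

Because the features are orthonormalized with respect to the signal's own moments, their empirical covariance is (near-)identity, which is what speeds up LMS convergence when the transmit distribution shifts.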
Integrated sensing and communication (ISAC) is widely recognized as a fundamental enabler for future wireless communications. In this paper, we present a joint communication and radar beamforming framework for maximizing a sum spectral efficiency (SE) while guaranteeing desired radar performance with imperfect channel state information (CSI) in multi-user and multi-target ISAC systems. To this end, we adopt either a radar transmit beam mean square error (MSE) or a receive signal-to-clutter-plus-noise ratio (SCNR) as the radar performance constraint of a sum SE maximization problem. To resolve inherent challenges such as non-convexity and imperfect CSI, we reformulate the problems and identify first-order optimality conditions for the joint radar and communication beamformer. Recasting the condition as a nonlinear eigenvalue problem with eigenvector dependency (NEPv), we develop an alternating method that finds the joint beamformer through power iteration and a Lagrange multiplier through binary search. The proposed framework encompasses both radar metrics and is robust to channel estimation error with low complexity. Simulations validate the proposed methods. In particular, we observe that the MSE and SCNR constraints exhibit complementary performance depending on the operating environment, which manifests the importance of the proposed comprehensive and robust optimization framework.
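The binary-search step for the Lagrange multiplier can be sketched generically as follows; the constraint-residual function g, its bracketing interval, and the tolerance are placeholders rather than quantities from the paper.

```python
def bisect_multiplier(g, lo, hi, tol=1e-8, max_iter=200):
    """Binary search for the multiplier mu solving g(mu) = 0 on [lo, hi].
    Assumes g is monotone decreasing with g(lo) >= 0 >= g(hi), e.g. a
    radar-constraint residual that tightens as the multiplier grows."""
    assert g(lo) >= 0 >= g(hi), "root must be bracketed"
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if g(mid) >= 0:
            lo = mid                # root lies in [mid, hi]
        else:
            hi = mid                # root lies in [lo, mid]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In an alternating scheme of this kind, each outer iteration would run a power-iteration update of the beamformer for the current multiplier and then rebracket and bisect the multiplier for the updated beamformer.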
Full-duplex communication systems have the potential to achieve significantly higher data rates and lower latency compared to their half-duplex counterparts. This advantage stems from their ability to transmit and receive data simultaneously. However, to enable successful full-duplex operation, the primary challenge lies in accurately eliminating strong self-interference (SI). Overcoming this challenge involves addressing various issues, including the nonlinearity of power amplifiers, the time-varying nature of the SI channel, and the non-stationary transmit data distribution. In this article, we present a review of recent advancements in digital self-interference cancellation (SIC) algorithms. Our focus is on comparing the effectiveness of adaptable model-based SIC methods with their model-free counterparts that leverage data-driven machine learning techniques. Through our comparison study under practical scenarios, we demonstrate that the model-based SIC approach offers a more robust solution to the time-varying SI channel and the non-stationary transmission, achieving optimal SIC performance in terms of the convergence rate while maintaining low computational complexity. To validate our findings, we conduct experiments using a software-defined radio testbed that conforms to the IEEE 802.11a standard. The experimental results demonstrate the robustness of the model-based SIC methods, providing practical evidence of their effectiveness.
With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies allowing one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks in one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows one to consider each point as a source of either the desired signal or interference while ensuring visibility to the typical user. Thanks to this model, we derive the coverage probability of STINs as a function of the major system parameters, chiefly the path-loss exponent, the height distributions and densities of satellites and terrestrial base stations, transmit powers, and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
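A toy Monte Carlo sketch of the modeling idea above: points of a homogeneous Poisson point process on a sphere, a visibility check for a typical user, and an empirical coverage estimate. The densities, the fixed satellite altitude (in place of the paper's random height marks), the path-loss model, and the noise level are illustrative assumptions; the paper's own coverage analysis is closed-form, not simulation-based.

```python
import numpy as np

def sample_ppp_on_sphere(density, radius, rng):
    """Homogeneous Poisson point process on a sphere of given radius:
    Poisson-distributed count, then i.i.d. uniform directions."""
    n = rng.poisson(density * 4.0 * np.pi * radius ** 2)
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v

def coverage_probability(density, r_sat, r_earth=6371.0, alpha=2.0,
                         snr_th=1.0, noise=1e-13, trials=500, seed=0):
    """Empirical coverage of a typical user at the 'north pole' of the
    Earth sphere, served by the strongest visible satellite while the
    other visible satellites act as interferers."""
    rng = np.random.default_rng(seed)
    user = np.array([0.0, 0.0, r_earth])
    covered = 0
    for _ in range(trials):
        pts = sample_ppp_on_sphere(density, r_sat, rng)
        vis = pts[pts[:, 2] > r_earth]      # above the user's horizon plane
        if len(vis) == 0:
            continue                        # no visible satellite: outage
        rx = np.linalg.norm(vis - user, axis=1) ** (-alpha)  # unit-power path loss
        sinr = rx.max() / (rx.sum() - rx.max() + noise)
        covered += sinr > snr_th
    return covered / trials
```

The horizon-plane test `pts[:, 2] > r_earth` is what the spherical construction buys: visibility to the typical user reduces to a one-coordinate comparison, which is also what keeps the analytical counterpart tractable.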
In the upcoming 6G era, multiple access (MA) will play an essential role in achieving the high throughput performance required in a wide range of wireless applications. Since MA and interference management are closely related issues, the conventional MA techniques are limited in that they cannot provide near-optimal performance across all interference regimes. Recently, rate-splitting multiple access (RSMA) has been gaining much attention. RSMA splits an individual message into two parts: a common part, decodable by every user, and a private part, decodable only by the intended user. Each user first decodes the common message and then decodes its private message by applying successive interference cancellation (SIC). By doing so, RSMA not only embraces the existing MA techniques as special cases but also provides significant performance gains by efficiently mitigating inter-user interference in a broad range of interference regimes. In this article, we first present the theoretical foundation of RSMA. Subsequently, we put forth four key benefits of RSMA: spectral efficiency, robustness, scalability, and flexibility. Building on this, we describe how RSMA can enable ten promising scenarios and applications, along with future research directions to pave the way for 6G.
In this paper, we propose a learning-based detection framework for uplink massive multiple-input multiple-output (MIMO) systems with one-bit analog-to-digital converters. Learning-based detection requires only counting the occurrences of the quantized outputs of -1 and +1 to estimate a likelihood probability at each antenna. Accordingly, the key advantage of this approach is that it performs maximum likelihood detection without explicit channel estimation, which has been one of the primary challenges of one-bit quantized systems. Learning in the high signal-to-noise ratio (SNR) regime, however, needs excessive training to estimate the extremely small likelihood probabilities. To address this drawback, we propose a dither-and-learning technique to estimate likelihood functions from dithered signals. First, we add a dithering signal to artificially decrease the SNR and then infer the likelihood function from the quantized dithered signals by using an SNR estimate derived from a deep neural network-based offline estimator. We extend our technique by developing an adaptive dither-and-learning method that updates the dithering power according to the patterns observed in the quantized dithered signals. The proposed framework is also applied to state-of-the-art channel-coded MIMO systems by computing bit-wise and user-wise log-likelihood ratios from the refined likelihood probabilities. Simulation results validate the detection performance of the proposed methods in both uncoded and coded systems.
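The dither-and-learning idea can be sketched for a single antenna and a real-valued observation as below. The Gaussian dither model, the clipping constants, and the use of a known noise level (in place of the paper's DNN-based SNR estimator) are simplifying assumptions for illustration.

```python
import numpy as np
from statistics import NormalDist

def dither_and_learn(y, noise_std, dither_std, n_train, rng):
    """Estimate the undithered likelihood p(sign(y + noise) = +1) at one
    antenna by training on dithered one-bit observations.
    y : noiseless real received value for the training symbol."""
    nd = NormalDist()                       # standard normal cdf / inverse
    eff_std = np.sqrt(noise_std ** 2 + dither_std ** 2)
    # one-bit observations of signal + thermal noise + dither
    obs = np.sign(y + eff_std * rng.standard_normal(n_train))
    # empirical probability, clipped away from 0 and 1 (the all-(+1) /
    # all-(-1) failure mode that dithering is meant to avoid at high SNR)
    p_dith = float(np.clip(np.mean(obs == 1), 1e-3, 1 - 1e-3))
    # invert the Gaussian cdf to recover the underlying signal value,
    # then evaluate the likelihood at the true (undithered) noise level
    y_hat = eff_std * nd.inv_cdf(p_dith)
    return nd.cdf(y_hat / noise_std)
```

Without the dither, a high-SNR training block would return an empirical count of exactly 0 or 1, from which no finite log-likelihood ratio can be formed; the dither keeps the counts informative and the cdf inversion removes its effect afterwards.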
We propose an adaptive learning-based framework for uplink massive multiple-input multiple-output (MIMO) systems with one-bit analog-to-digital converters. Learning-based detection does not need to estimate channels, which overcomes a key drawback of one-bit quantized systems. During training, learning-based detection suffers at high signal-to-noise ratio (SNR) because observations will be biased to +1 or -1, which leads to many zero-valued empirical likelihood functions. At low SNR, observations vary frequently in value, but the high noise power makes capturing the effect of the channel difficult. To address these drawbacks, we propose an adaptive dithering-and-learning method. During training, received values are mixed with dithering noise whose statistics are known to the base station, and the dithering noise power is updated for each antenna element depending on the observed pattern of the output. We then use the refined probabilities in the one-bit maximum likelihood (ML) detection rule. Simulation results validate the detection performance of the proposed method versus our previous method using fixed dithering noise power, as well as zero-forcing and optimal ML detection, both of which assume perfect channel knowledge.
In this paper, we investigate multicell-coordinated beamforming for large-scale multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) communications with low-resolution data converters. In particular, we seek to minimize the maximum per-antenna transmit power of the network under received signal-to-quantization-plus-interference-and-noise ratio constraints. Our primary contributions are (1) formulating the quantized downlink (DL) OFDM antenna power minimax problem and deriving its associated dual problem, (2) showing strong duality and interpreting the dual as a virtual quantized uplink (UL) OFDM problem, and (3) developing an iterative minimax algorithm to identify a feasible solution based on the dual problem, with performance validation through simulations. Specifically, the dual problem requires joint optimization of the virtual UL transmit power and noise covariance matrices. To solve the problem, we first derive the optimal dual solution of the UL problem for given noise covariance matrices. Then, we use the solution to compute the associated DL beamformer. Subsequently, using the DL beamformer, we update the UL noise covariance matrices via subgradient projection. Finally, we propose an iterative algorithm that repeats these steps to optimize the DL beamformers. Simulations validate the proposed algorithm in terms of the maximum antenna transmit power and peak-to-average-power ratio.
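A subgradient-projection update of the kind described above needs a projection back onto a feasible set of covariance matrices. The generic sketch below projects onto the positive semidefinite cone intersected with a trace budget; the trace-budget constraint and all names are illustrative stand-ins, not the paper's actual feasible set.

```python
import numpy as np

def _project_simplex_cap(w, budget):
    """Euclidean projection of a real vector onto {v >= 0, sum(v) <= budget}."""
    w = np.maximum(w, 0.0)
    if w.sum() <= budget:
        return w
    u = np.sort(w)[::-1]                    # sorted descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - budget) / k > 0)[0][-1]
    theta = (css[rho] - budget) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

def project_psd_trace(M, budget):
    """Project a Hermitian matrix onto {X PSD, tr(X) <= budget} by
    projecting its eigenvalues (valid because the constraint set is
    unitarily invariant)."""
    M = 0.5 * (M + M.conj().T)              # symmetrize against round-off
    w, V = np.linalg.eigh(M)
    w = _project_simplex_cap(w, budget)
    return (V * w) @ V.conj().T

def subgradient_step(Sigma, G, step, budget):
    """One projected-subgradient update of a virtual UL noise covariance:
    move along the subgradient G, then project back to feasibility."""
    return project_psd_trace(Sigma + step * G, budget)
```

Projecting the eigenvalues rather than the full matrix keeps each update at the cost of one eigendecomposition, which matters when the step is repeated inside every outer iteration of the minimax algorithm.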
To realize ultra-reliable low latency communications with high spectral efficiency and security, we investigate a joint optimization problem for downlink communications with multiple users and eavesdroppers in the finite blocklength (FBL) regime. We formulate a multi-objective optimization problem to maximize a sum secrecy rate by developing a secure precoder and to minimize a maximum error probability and information leakage rate. The main challenges arise from the complicated multi-objective problem, intractable back-off factors from the FBL assumption, non-convexity and non-smoothness of the secrecy rate, and the intertwined optimization variables. To address these challenges, we adopt an alternating optimization approach by decomposing the problem into two phases: secure precoding design, and maximum error probability and information leakage rate minimization. In the first phase, we obtain a lower bound on the secrecy rate and derive a first-order Karush-Kuhn-Tucker (KKT) condition to identify locally optimal solutions with respect to the precoders. Interpreting the condition as a generalized eigenvalue problem, we solve the problem by using a power iteration-based method. In the second phase, we adopt a weighted-sum approach and derive KKT conditions in terms of the error probabilities and leakage rates for given precoders. Simulations validate the proposed algorithm.