Abstract: Future 6G networks are envisioned to enhance the user experience in a multitude of different ways. The unification of existing terrestrial networks with non-terrestrial network (NTN) components will provide users with ubiquitous connectivity. Multi-access edge computing (MEC) will enable low-latency services and distributed learning paradigms, with computations performed closer to the end users. Advanced multiple access schemes, such as sparse code multiple access (SCMA), can be employed to efficiently move data from edge nodes to spaceborne MEC servers. However, the non-orthogonal nature of SCMA results in interference, limiting the effectiveness of traditional SCMA receivers. Hence, NTN links should be protected with robust channel codes, significantly reducing the uplink throughput. Thus, we investigate the application of artificial intelligence (AI) to SCMA receivers for 6G NTNs. We train an AI model with multi-task learning to optimally separate and receive superimposed SCMA signals. Through link-level simulations, we evaluate the block error rate (BLER) and the aggregated theoretical throughput achieved by the AI model as a function of the received energy per bit over noise power spectral density ratio (Eb/N0). We show that the proposed receiver achieves a target 10% BLER at an Eb/N0 3.5 dB lower than that required by the benchmark algorithm. We conclude the assessment by discussing the complexity-related challenges to the implementation of the AI model on board a low Earth orbit satellite.
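The multi-task formulation mentioned above treats the detection of each superimposed SCMA user as a separate task trained jointly on a shared network. A minimal sketch of such a combined training objective, assuming a simple weighted sum of per-user losses (the function name, uniform weighting, and loss values are illustrative assumptions, not details from the paper):

```python
# Hedged sketch: multi-task training objective for a joint SCMA receiver.
# Each superimposed user's symbol detection is one task; the shared
# network would be trained on the weighted sum of all task losses.

def multi_task_loss(per_user_losses, weights=None):
    """Combine per-user detection losses into a single training objective.

    per_user_losses: one scalar loss (e.g. cross-entropy) per SCMA user.
    weights: optional per-task weights; defaults to uniform weighting.
    """
    if weights is None:
        weights = [1.0] * len(per_user_losses)
    return sum(w * l for w, l in zip(weights, per_user_losses))

# Example: three users sharing the same time-frequency resources.
total = multi_task_loss([0.7, 1.2, 0.9])
print(total)  # sum of the three per-user losses
```

In practice the weights can be tuned to balance users with different channel conditions; the uniform choice here is only the simplest starting point.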
Abstract: Wireless communications are typically subject to complex channel dynamics, requiring the transmission of pilot sequences to estimate and equalize such effects and correctly receive the information bits. This is especially true in 6G non-terrestrial networks (NTNs) in low Earth orbit, where one end of the communication link orbits the Earth at several kilometers per second and a multi-carrier waveform, such as orthogonal frequency division multiplexing (OFDM), is employed. To minimize the pilot overhead, we remove pilot symbols every other OFDM slot and propose a channel predictor to obtain the channel frequency response (CFR) matrix in the absence of pilots. The algorithm employs an encoder-decoder convolutional neural network (CNN) and a long short-term memory (LSTM) layer, along with skip connections, to predict the CFR matrix for the upcoming slot based on the current one. We demonstrate the effectiveness of the proposed predictor through numerical simulations with tapped delay line channel models, highlighting the effective throughput improvement. We further assess the generalization capabilities of the model, showing minimal throughput degradation when testing under different Doppler spreads and in both line of sight (LoS) and non-LoS propagation conditions. Finally, we discuss computational-complexity-related aspects of the lightweight hybrid CNN-LSTM architecture.
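The overhead saving from removing pilots every other slot can be quantified directly. A minimal sketch, assuming an illustrative slot geometry of 14 OFDM symbols per slot with one pilot symbol per pilot-bearing slot (these numbers are assumptions for illustration, not taken from the paper):

```python
# Hedged sketch: data-symbol fraction with and without the channel
# predictor, averaged over a pair of consecutive OFDM slots.
# Slot geometry (14 symbols/slot, 1 pilot symbol) is illustrative.

def data_fraction(symbols_per_slot, pilot_symbols, predictor_enabled):
    """Fraction of OFDM symbols carrying data over two slots."""
    if predictor_enabled:
        # Pilots are sent only in the first slot; the predictor supplies
        # the CFR for the second slot, which carries data exclusively.
        data = (symbols_per_slot - pilot_symbols) + symbols_per_slot
    else:
        # Both slots carry pilots.
        data = 2 * (symbols_per_slot - pilot_symbols)
    return data / (2 * symbols_per_slot)

baseline = data_fraction(14, 1, predictor_enabled=False)   # 13/14
predicted = data_fraction(14, 1, predictor_enabled=True)   # 27/28
print(predicted / baseline)  # relative gain in data symbols
```

With these assumed numbers, the predictor frees roughly 3.8% additional data symbols; the true gain depends on the actual pilot density of the configuration studied.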
Abstract: Faster-than-Nyquist (FTN) signaling aims at improving the spectral efficiency of wireless communication systems by exceeding the boundaries set by the Nyquist-Shannon sampling theorem. Fifty years after its first introduction in the scientific literature, wireless communications have changed significantly, but spectral efficiency remains one of the key challenges. To adopt FTN signaling, inter-symbol interference (ISI) patterns need to be equalized at the receiver. Motivated by the pattern recognition capabilities of convolutional neural networks with skip connections, we propose such a deep learning architecture for ISI equalization and symbol demodulation in FTN receivers. We investigate the performance of the proposed model considering quadrature phase shift keying modulation and low-density parity-check coding, and compare it to a set of benchmarks, including frequency-domain equalization, a quadratic-programming-based receiver, and an equalization scheme based on a deep neural network. We show that our receiver outperforms all benchmarks, achieving error rates comparable to those in an additive white Gaussian noise channel, and higher effective throughput, thanks to the increased spectral efficiency of FTN signaling. With a compression factor of 60% and code rate 3/4, the proposed model achieves a peak effective throughput of 2.5 Mbps at just 10 dB of energy per bit over noise power spectral density ratio, with the other receivers being limited by error floors due to the strong inter-symbol interference. To promote reproducibility in deep learning for wireless communications, our code is open source at the repository provided in the references.
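The throughput figure above follows directly from the FTN rate scaling: with compression factor tau, symbols are transmitted every tau*T seconds instead of every T, so the raw rate grows by 1/tau. A minimal sketch using the abstract's QPSK, rate-3/4, tau = 0.6 configuration (the 1 Msym/s Nyquist symbol rate is an illustrative assumption, chosen to be consistent with the quoted 2.5 Mbps peak):

```python
# Hedged sketch: ideal (error-free) information bit rate of an FTN link.
# tau < 1 compresses the symbol spacing, scaling the symbol rate by 1/tau.

def ftn_bit_rate(nyquist_symbol_rate, bits_per_symbol, code_rate, tau):
    """Information bit rate in bits per second, ignoring residual errors."""
    return nyquist_symbol_rate / tau * bits_per_symbol * code_rate

# QPSK carries 2 bits per symbol; code rate 3/4; assumed 1 Msym/s baseline.
nyquist = ftn_bit_rate(1.0e6, 2, 0.75, tau=1.0)  # Nyquist baseline
ftn = ftn_bit_rate(1.0e6, 2, 0.75, tau=0.6)      # FTN with 60% compression
print(nyquist, ftn)  # 1.5 Mbps vs 2.5 Mbps
```

The 1/tau = 1.67x gain is the spectral-efficiency ceiling that the equalizer must preserve; benchmarks with error floors cannot realize it at high Eb/N0.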