Abstract: In this paper, we examine the encoding and decoding of transmitted sequences for downlink time-offset NOMA (T-NOMA) with faster-than-Nyquist signaling. As a baseline, we use the singular value decomposition (SVD)-based scheme proposed in previous studies for encoding and decoding. Although this SVD-based scheme provides reliable communication, its time complexity grows quadratically with the sequence length. We propose a convolutional neural network (CNN) auto-encoder (AE) for encoding and decoding with linear time complexity. We explain the design of the encoder and decoder architectures and the training criteria. By examining several variants of the CNN AE, we show that it achieves an excellent trade-off between performance and complexity. In a two-user T-NOMA system with no timing offset errors and no channel state information (CSI) estimation errors, the proposed CNN AE outperforms the SVD method by approximately 2 dB while requiring lower implementation complexity. In the presence of a CSI error variance of 1\% and a timing error uniformly distributed within $\pm 4\%$ of the symbol interval, the proposed CNN AE provides SNR gains of up to 10 dB over the SVD method. We also propose a novel modified training objective consisting of a linear combination of the traditionally used cross-entropy (CE) loss function and a closed-form expression for the bit error rate (BER), which we call the Q-loss function. Simulations show that the modified loss function achieves SNR gains of up to 1 dB over the CE loss function alone. Finally, we investigate several novel CNN architectures for the encoder and decoder of the AE that employ additional linear feed-forward connections between the CNN stages; experiments show that these architectural innovations yield further SNR gains of up to 2.2 dB over the standard serial CNN AE architecture.
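For illustration, one plausible form of the combined training objective summarized above is sketched below; the weighting coefficient $\lambda$ is an assumed hyperparameter introduced only for this sketch, and the exact Q-loss expression used in the paper may differ.
\begin{equation*}
\mathcal{L} \;=\; \lambda\,\mathcal{L}_{\mathrm{CE}} \;+\; (1-\lambda)\,\mathcal{L}_{Q}, \qquad 0 \le \lambda \le 1,
\end{equation*}
where $\mathcal{L}_{\mathrm{CE}}$ is the usual cross-entropy between the transmitted bits and the decoder's soft outputs, and $\mathcal{L}_{Q}$ denotes a closed-form BER term built from the Gaussian Q-function, $Q(x)=\tfrac{1}{\sqrt{2\pi}}\int_{x}^{\infty} e^{-t^{2}/2}\,dt$.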