Abstract:Recent advancements have introduced federated machine learning-based channel state information (CSI) compression before the user equipments (UEs) upload the downlink CSI to the base transceiver station (BTS). However, most existing algorithms impose a high communication overhead due to frequent parameter exchanges between the UEs and the BTS. In this work, we propose a model splitting approach with a shared model at the BTS and multiple local models at the UEs to reduce communication overhead. Moreover, we integrate a pipeline module at the BTS to reduce training time. By limiting the exchanges to boundary parameters during the forward and backward passes, our algorithm significantly reduces the number of parameters exchanged during federated CSI feedback training compared with the benchmarks.
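The split-training idea in this abstract can be illustrated with a minimal NumPy sketch. All dimensions, the linear encoder/decoder, and the variable names are illustrative assumptions, not the paper's architecture; the point is that only boundary activations and boundary gradients cross the air interface, not full model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper): a CSI vector of
# length 64 compressed to a boundary activation of length 8.
CSI_DIM, BOUNDARY_DIM = 64, 8

W_ue = rng.standard_normal((BOUNDARY_DIM, CSI_DIM)) * 0.1   # local model, stays on the UE
W_bts = rng.standard_normal((CSI_DIM, BOUNDARY_DIM)) * 0.1  # shared model, stays on the BTS

csi = rng.standard_normal(CSI_DIM)

# Forward pass: only the boundary activation is uploaded by the UE.
z = W_ue @ csi            # BOUNDARY_DIM values sent UE -> BTS
csi_hat = W_bts @ z       # reconstruction at the BTS

# Backward pass: only the gradient w.r.t. the boundary comes back.
err = csi_hat - csi
grad_z = W_bts.T @ err    # BOUNDARY_DIM values sent BTS -> UE

exchanged = 2 * BOUNDARY_DIM         # per-step boundary traffic
full_model = W_ue.size + W_bts.size  # cost of exchanging all parameters instead
print(exchanged, full_model)
```

Under these toy dimensions, one training step exchanges 16 values instead of the 1024 a full parameter exchange would require, which is the kind of saving the abstract claims.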
Abstract:The integrated sensing and communication (ISAC) technique is regarded as a key component in future vehicular applications. In this paper, we propose an ISAC solution that integrates Long Range (LoRa) modulation with frequency-modulated continuous wave (FMCW) radar in the millimeter-wave (mmWave) band, called mmWave-LoRadar. This design introduces sensing capability to LoRa communication with a simplified hardware architecture. In particular, we uncover the dual discontinuity issues in the time and phase of the mmWave-LoRadar received signals, rendering conventional signal processing techniques ineffective. As a remedy, we propose a corresponding hardware design and signal processing schemes under the compressed sampling framework. These techniques effectively cope with the dual discontinuity issues and mitigate the demands for high-sampling-rate analog-to-digital converters while achieving good performance. Simulation results demonstrate the superiority of the mmWave-LoRadar ISAC system in vehicular communication and sensing networks.
Abstract:Semantic communications have emerged as a crucial research direction for future wireless communication networks. However, as wireless systems become increasingly complex, the demands for computation and communication resources in semantic communications continue to grow rapidly. This paper investigates the trade-off between computation and communication in wireless semantic communications, taking into consideration transmission task delay and performance constraints within the semantic communication framework. We propose a novel trade-off metric to analyze the balance between computation and communication in semantic transmissions and employ a deep reinforcement learning (DRL) algorithm to minimize this metric, thereby reducing the cost associated with balancing computation and communication. Through simulations, we analyze the trade-off between computation and communication and demonstrate the effectiveness of optimizing this trade-off metric.
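A computation-communication trade-off metric of the kind this abstract describes can be sketched as follows. The weighting, the linear cost model, and all numbers are illustrative assumptions, not the paper's definition; the sketch only shows why heavier semantic compression (more computation, fewer transmitted bits) can lower the combined cost.

```python
# Illustrative trade-off metric: a weighted sum of computation delay and
# communication delay. alpha and the cost models are assumed, not the
# paper's exact formulation.
def tradeoff_metric(comp_cycles, comm_bits, cpu_rate, link_rate, alpha=0.5):
    """Weighted combination of computation delay and communication delay."""
    t_comp = comp_cycles / cpu_rate   # seconds spent on semantic encoding
    t_comm = comm_bits / link_rate    # seconds spent transmitting
    return alpha * t_comp + (1 - alpha) * t_comm

# Lighter semantic coding: little computation, many bits to send.
light = tradeoff_metric(comp_cycles=1e6, comm_bits=8e4, cpu_rate=1e9, link_rate=1e6)
# Heavier semantic coding: more computation, far fewer bits to send.
heavy = tradeoff_metric(comp_cycles=5e6, comm_bits=2e4, cpu_rate=1e9, link_rate=1e6)
print(light, heavy)
```

With these assumed numbers the heavier-compression operating point achieves the lower metric, which is the balance a DRL agent would search for.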
Abstract:In this paper, we propose a novel active reconfigurable intelligent surface (RIS)-assisted amplitude-domain reflection modulation (ADRM) transmission scheme, termed ARIS-ADRM. This innovative approach leverages the additional degree of freedom (DoF) provided by the amplitude domain of the active RIS to perform index modulation (IM), thereby enhancing spectral efficiency (SE) without increasing the costs associated with additional radio frequency (RF) chains. Specifically, the ARIS-ADRM scheme transmits information bits through both the modulation symbol and the index of active RIS amplitude allocation patterns (AAPs). To evaluate the performance of the proposed ARIS-ADRM scheme, we provide an achievable rate analysis and derive a closed-form expression for the upper bound on the average bit error probability (ABEP). Furthermore, we formulate an optimization problem to construct the AAP codebook, aiming to minimize the ABEP. Simulation results demonstrate that the proposed scheme significantly improves error performance under the same SE conditions compared to its benchmarks. This improvement is due to its ability to flexibly adapt the transmission rate by fully exploiting the amplitude-domain DoF provided by the active RIS.
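The bit-mapping principle behind this index-modulation scheme can be sketched in a few lines. The constellation size, codebook size, and function names are assumptions for illustration: part of each bit block selects an amplitude allocation pattern (AAP) index, and the rest selects a conventional modulation symbol, so the AAP index carries extra bits without extra RF chains.

```python
import math

# Illustrative parameters (assumed, not from the paper): an M-ary
# constellation and a codebook of N_AAP amplitude allocation patterns.
M, N_AAP = 4, 4
BITS_SYM = int(math.log2(M))      # bits carried by the modulation symbol
BITS_AAP = int(math.log2(N_AAP))  # extra bits carried by the AAP index

def adrm_map(bits):
    """Split one block of bits between the AAP index and the symbol index."""
    assert len(bits) == BITS_SYM + BITS_AAP
    aap_index = int("".join(map(str, bits[:BITS_AAP])), 2)
    sym_index = int("".join(map(str, bits[BITS_AAP:])), 2)
    return aap_index, sym_index

# Spectral efficiency gain of the index-modulation dimension:
se_total = BITS_SYM + BITS_AAP  # bits per transmission vs. BITS_SYM alone
print(adrm_map([1, 0, 1, 1]), se_total)
```

Here the block `[1, 0, 1, 1]` maps to AAP index 2 and symbol index 3, doubling the bits per transmission relative to the symbol alone under these toy parameters.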
Abstract:Affine frequency division multiplexing (AFDM) is a promising chirp-assisted multicarrier waveform for future high-mobility communications. This paper is devoted to enhanced receiver design for multiple-input multiple-output AFDM (MIMO-AFDM) systems. Firstly, we introduce a unified variational inference (VI) approach to approximate the target posterior distribution, under which the belief propagation (BP) and expectation propagation (EP)-based algorithms are derived. As both VI-based detection and low-density parity-check (LDPC) decoding can be expressed by bipartite graphs in MIMO-AFDM systems, we construct a joint sparse graph (JSG) by merging the graphs of these two for low-complexity receiver design. Then, based on this graph model, we present the detailed message propagation of the proposed JSG. Additionally, we propose an enhanced JSG (E-JSG) receiver based on the linear constellation encoding model. The proposed E-JSG eliminates the need for interleavers, de-interleavers, and log-likelihood ratio transformations, thus leading to concurrent detection and decoding over the integrated sparse graph. To further reduce detection complexity, we introduce a sparse channel method that merges multiple graph edges with insignificant channel coefficients into a single edge on the VI graph. Simulation results show the superiority of the proposed receivers in terms of computational complexity, detection and decoding latency, and error rate performance compared to the conventional ones.
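The edge-merging step mentioned at the end of this abstract can be illustrated with a toy channel matrix. The matrix values and the threshold are assumptions for illustration, not the paper's parameters: per observation node, edges whose channel coefficients fall below a threshold are lumped into a single aggregate edge, shrinking the VI graph.

```python
import numpy as np

# Illustrative 4x4 channel (assumed values): two strong taps per row and
# two insignificant ones. THRESH is an assumed tuning knob.
H = np.array([[ 1.2, -0.8, 0.02,  0.01],
              [ 0.9,  1.1, 0.03, -0.02],
              [-1.0,  0.7, 0.01,  0.04],
              [ 0.8, -0.9, 0.02,  0.03]])
THRESH = 0.1

edges_before = H.size
edges_after = 0
for row in H:
    weak = np.abs(row) < THRESH
    # Strong coefficients keep their own edges; all weak coefficients on
    # this observation node are merged into one aggregate edge.
    edges_after += int((~weak).sum()) + (1 if weak.any() else 0)

print(edges_before, edges_after)
```

In this toy example the graph shrinks from 16 edges to 12, and the message-passing complexity on the VI graph shrinks with it.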
Abstract:Sparse code multiple access (SCMA) and multiple-input multiple-output (MIMO) are considered as two efficient techniques to provide both massive connectivity and high spectrum efficiency for future machine-type wireless networks. This paper proposes a single sparse graph (SSG) enhanced expectation propagation algorithm (EPA) receiver, referred to as SSG-EPA, for uplink MIMO-SCMA systems. Firstly, we reformulate the sparse codebook mapping process using a linear encoding model, which transforms the variable nodes (VNs) of SCMA from symbol-level to bit-level VNs. Such transformation facilitates the integration of the VNs of SCMA and low-density parity-check (LDPC) codes, thereby merging the SCMA and LDPC graphs into an SSG. Subsequently, to further reduce the detection complexity, the message propagation between SCMA VNs and function nodes (FNs) is designed based on EPA principles. Different from the existing iterative detection and decoding (IDD) structure, the proposed SSG-EPA allows simultaneous detection and decoding at each iteration, and eliminates the use of interleavers, de-interleavers, symbol-to-bit, and bit-to-symbol LLR transformations. Simulation results show that the proposed SSG-EPA achieves better error rate performance compared to the state-of-the-art schemes.
Abstract:Next-generation wireless networks are conceived to provide reliable and high-data-rate communication services for diverse scenarios, such as vehicle-to-vehicle, unmanned aerial vehicles, and satellite networks. The severe Doppler spreads in the underlying time-varying channels induce destructive inter-carrier interference (ICI) in the extensively adopted orthogonal frequency division multiplexing (OFDM) waveform, leading to severe performance degradation. This calls for a new air interface design that can accommodate the severe delay-Doppler spreads in highly dynamic channels while possessing sufficient flexibility to cater to various applications. This article provides a comprehensive overview of a promising chirp-based waveform named affine frequency division multiplexing (AFDM). It features two tunable parameters and achieves the optimal diversity order in doubly dispersive channels (DDCs). We study the fundamental principle of AFDM, illustrating its intrinsic suitability for DDCs. Based on that, several potential applications of AFDM are explored. Furthermore, the major challenges and the corresponding solutions of AFDM are presented, followed by several future research directions. Finally, we draw some instructive conclusions about AFDM, hoping to provide useful inspiration for its development.
Abstract:This paper investigates uplink transmission from a single-antenna mobile phone to a cluster of satellites, emphasizing the role of inter-satellite links (ISLs) in facilitating cooperative signal detection. The study focuses on non-ideal ISLs, examining both terahertz (THz) and free-space optical (FSO) ISLs concerning their ergodic capacity. We present a practical scenario derived from the recent 3GPP standard, specifying the frequency band, bandwidth, user and satellite antenna gains, power levels, and channel characteristics in alignment with the latest 3GPP specifications for non-terrestrial networks (NTN). Additionally, we propose a satellite selection method to identify the optimal satellite as the master node (MN), responsible for signal processing. This method takes into account both the user-satellite link and ISL channels. For the THz ISL analysis, we derive a closed-form approximation for ergodic capacity under two scenarios: one with instantaneous channel state information (CSI) and another with only statistical CSI shared between satellites. For the FSO ISL analysis, we present a closed-form approximate upper bound for ergodic capacity, accounting for the impact of pointing error loss. Furthermore, we evaluate the effects of different ISL frequencies and pointing errors on spectral efficiency. Simulation results demonstrate that multi-satellite multiple-input multiple-output (MIMO) satellite communication (SatCom) significantly outperforms single-satellite SatCom in terms of spectral efficiency. Additionally, our approximated upper bound for ergodic capacity closely aligns with results obtained from Monte Carlo simulations.
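The master-node selection idea can be sketched with a toy score: the link-quality numbers and the additive combination rule below are assumptions for illustration, not the paper's exact criterion. The MN is the satellite that jointly has a good user-satellite link and good ISLs for gathering the other satellites' signals.

```python
# Illustrative master-node (MN) selection for a 3-satellite cluster.
# g_user[k]: user-satellite link quality of satellite k (assumed values).
# isl[k][j]: ISL quality between satellites k and j (assumed values).
g_user = [0.7, 1.4, 1.1]
isl = [[0.0, 0.9, 0.8],
       [0.9, 0.0, 1.2],
       [0.8, 1.2, 0.0]]

def select_mn(g_user, isl):
    """Pick the satellite maximizing user-link quality plus the total ISL
    quality it needs to collect the other satellites' observations."""
    scores = []
    for k in range(len(g_user)):
        isl_sum = sum(isl[k][j] for j in range(len(g_user)) if j != k)
        scores.append(g_user[k] + isl_sum)
    return max(range(len(scores)), key=scores.__getitem__)

print(select_mn(g_user, isl))
```

With these toy numbers, satellite 1 wins: it has both the strongest user link and strong ISLs, so centralizing the signal processing there costs the least over the non-ideal ISLs.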
Abstract:Advances in wireless technology have significantly increased the number of wireless connections, leading to higher energy consumption in networks. Among these, base stations (BSs) in radio access networks (RANs) account for over half of the total energy usage. To address this, we propose a multi-cell sleep strategy combined with adaptive cell zooming, user association, and reconfigurable intelligent surface (RIS) to minimize BS energy consumption. This approach allows BSs to enter sleep during low traffic, while adaptive cell zooming and user association dynamically adjust coverage to balance traffic load and enhance data rates through RIS, minimizing the number of active BSs. However, it is important to note that the proposed method may achieve energy savings at the cost of increased delay, requiring a trade-off between these two factors. Moreover, minimizing BS energy consumption under the delay constraint is a complicated non-convex problem. To address this issue, we model the RIS-aided multi-cell network as a Markov decision process (MDP) and use the proximal policy optimization (PPO) algorithm to optimize sleep mode (SM), cell zooming, and user association. Besides, we utilize a double cascade correlation network (DCCN) algorithm to optimize the RIS reflection coefficients. Simulation results demonstrate that PPO balances energy savings and delay, while DCCN-optimized RIS enhances BS energy savings. Compared to systems optimized by the benchmark DQN algorithm, energy consumption is reduced by 49.61%.
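The energy-delay trade-off at the heart of this MDP formulation can be sketched with a toy reward. The penalty weight, the linear form, and all numbers are assumptions for illustration, not the paper's reward function; they only show why the agent must weigh sleep-mode energy savings against wake-up delay.

```python
# Illustrative MDP reward balancing BS energy against delay.
LAMBDA = 5.0  # assumed delay penalty weight; tunes the energy-delay trade-off

def reward(energy_joules, delay_seconds):
    """Higher reward for lower energy, penalized linearly by delay."""
    return -(energy_joules + LAMBDA * delay_seconds)

# Staying awake: high energy, negligible delay.
awake = reward(energy_joules=10.0, delay_seconds=0.01)
# Sleeping: low energy, but users wait for the BS to wake up.
asleep = reward(energy_joules=2.0, delay_seconds=0.50)
print(awake, asleep)
```

With these toy numbers sleeping yields the higher reward, but raising LAMBDA (a stricter delay constraint) flips the decision, which is exactly the trade-off the PPO agent learns to navigate per traffic state.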
Abstract:This paper proposes a graph neural network (GNN)-based space multiple-input multiple-output (MIMO) framework, named GSM, for direct-to-cell communications, aiming to achieve distributed coordinated beamforming for low Earth orbit (LEO) satellites. Firstly, a system model for LEO multi-satellite communications is established, where multiple LEO satellites collaborate to perform distributed beamforming and communicate with terrestrial user terminals coherently. Based on the system model, a weighted sum rate maximization problem is formulated. Secondly, a GNN-based method is developed to address the optimization problem. Particularly, the adopted neural network is composed of multiple identical GNNs, which are trained together and then deployed individually on each LEO satellite. Finally, the trained GNN is quantized and deployed on a field-programmable gate array (FPGA) to accelerate the inference by customizing the microarchitecture. Simulation results demonstrate that the proposed GNN scheme outperforms the benchmark schemes, including maximum ratio transmission, zero forcing, and minimum mean square error. Furthermore, experimental results show that the FPGA-based accelerator achieves remarkably low inference latency, ranging from 3.863 to 5.883 ms under a 10-ns target clock period with 8-bit fixed-point data representation.
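The weighted sum-rate objective this abstract formulates can be evaluated with a short NumPy sketch. The dimensions, the matched-filter (MRT-style) beamformers, and the user weights are assumptions for illustration, not the paper's setup; the sketch only shows how the objective is computed from the stacked multi-satellite channel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed dimensions: 3 satellites with 4 antennas each, 2 users.
N_SATS, N_USERS, N_ANT = 3, 2, 4
NOISE = 1.0

# Stacked channel from all satellites' antennas to each user.
H = (rng.standard_normal((N_USERS, N_SATS * N_ANT))
     + 1j * rng.standard_normal((N_USERS, N_SATS * N_ANT))) / np.sqrt(2)

# Simple matched-filter beamformers, one unit-norm column per user.
W = H.conj().T / np.linalg.norm(H, axis=1)

weights = np.array([0.6, 0.4])  # assumed per-user priority weights
rates = np.empty(N_USERS)
for k in range(N_USERS):
    sig = abs(H[k] @ W[:, k]) ** 2
    intf = sum(abs(H[k] @ W[:, j]) ** 2 for j in range(N_USERS) if j != k)
    rates[k] = np.log2(1 + sig / (intf + NOISE))

wsr = float(weights @ rates)  # the weighted sum rate objective
print(wsr)
```

The GNN in the paper replaces the fixed matched-filter beamformers above with learned ones that each satellite computes locally, with the weighted sum rate as the training objective.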