Interdisciplinary Centre for Security, Reliability and Trust
Abstract: This paper investigates new efficient transmission architectures for multi-satellite massive multiple-input multiple-output (MIMO). We study the weighted sum-rate maximization problem in a multi-satellite system where multiple satellites transmit independent data streams to multi-antenna user terminals, thereby achieving higher throughput. We first adopt a multi-satellite weighted minimum mean square error (WMMSE) formulation under statistical channel state information (CSI), which yields closed-form updates for the precoding and receive vectors. To overcome the high complexity of this optimization, we propose a learning-based WMMSE design that integrates tensor equivariance with closed-form recovery, enabling inference with near-optimal performance without iterative updates. Moreover, to reduce the inter-satellite signaling overhead incurred by exchanging CSI and precoding vectors in centralized coordination, we develop a decentralized multi-satellite transmission scheme in which each satellite infers its precoders locally rather than receiving them from a central satellite. The proposed decentralized scheme leverages periodically available satellite state information, such as orbital positions and satellite attitude, which is inherently accessible in satellite networks, and employs a dual-branch tensor-equivariant network to predict the precoders at each satellite. Numerical results demonstrate that the proposed multi-satellite transmission significantly outperforms single-satellite systems in sum rate; that the decentralized scheme achieves sum-rate performance close to that of the centralized schemes while substantially reducing computational complexity and inter-satellite overhead; and that the learning-based schemes exhibit strong robustness and scalability across different scenarios.
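The WMMSE closed-form updates mentioned above follow a well-known alternating pattern: MMSE receive filter, MSE weight, and precoder recovery under a power constraint. A minimal single-transmitter MU-MISO sketch (not the paper's multi-satellite, statistical-CSI variant; channel matrix `H`, power budget `P`, and noise variance `sigma2` are illustrative assumptions) is:

```python
import numpy as np

def wmmse_sum_rate(H, P, sigma2=1.0, iters=50):
    """Classical WMMSE iteration for MU-MISO downlink sum-rate maximization.
    H: (K, N) with row k equal to h_k^H, so H[k] @ w gives h_k^H w.
    A simplified sketch, not the multi-satellite scheme of the paper."""
    K, N = H.shape
    # Initialize with matched-filter precoders and an equal power split
    W = H.conj().T / np.linalg.norm(H, axis=1)
    W *= np.sqrt(P / K)
    for _ in range(iters):
        G = H @ W                                   # G[k, j] = h_k^H w_j
        power = np.sum(np.abs(G) ** 2, axis=1) + sigma2
        u = np.diag(G) / power                      # closed-form MMSE receivers
        e = 1.0 - np.conj(u) * np.diag(G)           # per-user MSEs
        v = 1.0 / np.real(e)                        # closed-form MSE weights
        # Precoder update: solve (A + mu I) W = B, bisecting mu to meet the
        # total power constraint sum_k ||w_k||^2 <= P
        A = (H.conj().T * (v * np.abs(u) ** 2)) @ H
        B = H.conj().T * (v * np.conj(u))
        lo, hi = 0.0, 1e3
        for _ in range(60):
            mu = 0.5 * (lo + hi)
            W = np.linalg.solve(A + mu * np.eye(N), B)
            if np.sum(np.abs(W) ** 2) > P:
                lo = mu                             # too much power: raise mu
            else:
                hi = mu
    G = H @ W
    sinr = np.abs(np.diag(G)) ** 2 / (
        np.sum(np.abs(G) ** 2, axis=1) - np.abs(np.diag(G)) ** 2 + sigma2)
    return np.log2(1.0 + sinr).sum()
```

The learning-based design in the abstract replaces this iterative loop with a single network inference, recovering the precoders from the same closed-form structure.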
Abstract: The increasing complexity of neural networks poses significant challenges for democratizing federated learning (FL) on resource-constrained client devices. Parallel split learning (PSL) has emerged as a promising solution by offloading substantial computing workload to a server via model partitioning, shrinking client-side computing load, and eliminating client-side model aggregation for reduced communication and deployment costs. Since PSL is aggregation-free, it suffers from severe training divergence stemming from gradient directional inconsistency across clients. To address this challenge, we propose GAPSL, a gradient-aligned PSL framework that comprises two key components: leader gradient identification (LGI) and gradient direction alignment (GDA). LGI dynamically selects a set of directionally consistent client gradients to construct a leader gradient that captures the global convergence trend. GDA employs a direction-aware regularization to align each client's gradient with the leader gradient, thereby mitigating inter-device gradient directional inconsistency and enhancing model convergence. We evaluate GAPSL on a prototype computing testbed. Extensive experiments demonstrate that GAPSL consistently outperforms state-of-the-art benchmarks in training accuracy and latency.
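The LGI/GDA pair described above can be illustrated with a small NumPy sketch. The selection rule (cosine similarity to the mean gradient), the `keep_ratio`, and the mixing weight `lam` are illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np

def leader_gradient(grads, keep_ratio=0.5):
    """LGI sketch: keep the most directionally consistent client gradients
    (highest cosine similarity to the mean) and average them into a leader.
    grads: (num_clients, dim) array of flattened gradients. Hypothetical rule."""
    mean = grads.mean(axis=0)
    cos = (grads @ mean) / (np.linalg.norm(grads, axis=1)
                            * np.linalg.norm(mean) + 1e-12)
    k = max(1, int(len(grads) * keep_ratio))
    return grads[np.argsort(cos)[-k:]].mean(axis=0)

def align_gradient(g, leader, lam=0.1):
    """GDA sketch: blend a client gradient toward the leader's direction,
    preserving the client gradient's magnitude in the leader component."""
    d = leader / (np.linalg.norm(leader) + 1e-12)
    return (1.0 - lam) * g + lam * np.linalg.norm(g) * d
```

An outlier gradient pointing against the majority is pulled toward the leader direction, which is the divergence-mitigation effect the abstract attributes to GDA.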
Abstract: In Earth observation (EO) missions with low Earth orbit (LEO) satellites, high-resolution image acquisition generates massive data volumes that pose a significant challenge for transmission under the limited satellite power budget, while LEO satellite movement introduces time-varying link conditions. To enable efficient image transmission, this paper employs semantic communication (SemCom) with joint source-channel coding (JSCC), which focuses on transmitting meaningful information to reduce power consumption. Under a quality-of-service (QoS) requirement defined by image reconstruction quality, this work aims to minimize the total transmit power by jointly optimizing the JSCC encoder-decoder parameters and resource allocation. However, the implicit relationship among JSCC parameters, link quality, and image quality, coupled with the presence of mixed integer-continuous variables, makes the problem difficult to solve directly. To address this, a curve-fitting model is proposed to approximate the JSCC compression-SNR-quality relationship. Then, the joint compression ratio-resource allocation (JCRRA) algorithm is proposed to solve the underlying problem. Numerical results demonstrate that the proposed method achieves substantial power savings compared to both greedy algorithms and conventional transmission paradigms.
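The role of the curve-fitting model above is to replace the implicit JSCC compression-SNR-quality relationship with an explicit, invertible surrogate. A minimal least-squares sketch, assuming a simple linear-in-parameters form (the paper's actual fitted curve is not given in the abstract; the form `quality ≈ w0 + w1·snr + w2·log(cr)` is purely illustrative):

```python
import numpy as np

def fit_quality_model(snr, cr, quality):
    """Least-squares fit of a hypothetical surrogate:
       quality ≈ w0 + w1*snr + w2*log(cr)
    snr, cr, quality: 1-D sample arrays from offline JSCC evaluations."""
    X = np.column_stack([np.ones_like(snr), snr, np.log(cr)])
    w, *_ = np.linalg.lstsq(X, quality, rcond=None)
    return w

def predict_quality(w, snr, cr):
    return w[0] + w[1] * snr + w[2] * np.log(cr)

def min_snr_for_qos(w, cr, q_target):
    """Invert the fitted model: the minimum SNR meeting a QoS quality target
    at compression ratio cr, usable inside a power-minimization loop."""
    return (q_target - w[0] - w[2] * np.log(cr)) / w[1]
```

Once fitted, such a model turns the QoS constraint into an explicit SNR requirement per compression ratio, which is what makes the joint compression-resource allocation tractable.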
Abstract: Deep-learning (DL)-based precoding in multi-user multiple-input single-output (MU-MISO) systems involves training DL models to map features derived from channel coefficients to labels derived from precoding weights. Traditionally, complex-valued channel and precoder coefficients are parameterized using either their real and imaginary components or their amplitude and phase. However, precoding performance depends on the magnitudes of inner products between channel and precoding vectors, which are invariant to global phase rotations. Conventional representations fail to exploit this symmetry, leading to inefficient learning and degraded generalization. To address this, we propose a DL framework based on complex projective space (CPS) parameterizations of both the wireless channel and the weighted minimum mean squared error (WMMSE) precoder vectors. By removing the global phase redundancies inherent in conventional representations, the proposed framework enables the DL model to learn geometry-aligned and physically distinct channel-precoder mappings. Two CPS parameterizations based on real-valued embeddings and complex hyperspherical coordinates are investigated and benchmarked against two baseline methods. Simulation results demonstrate substantial improvements in sum-rate performance and generalization, with a negligible increase in model complexity.
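The global-phase invariance exploited above can be made concrete with a few lines: normalize the vector and rotate it so a reference entry is real-nonnegative, which picks one representative per CPS equivalence class. This is a generic sketch of the real-valued-embedding idea, not necessarily the exact parameterization used in the paper:

```python
import numpy as np

def cps_embed(h):
    """Map a complex vector to a global-phase-invariant real embedding:
    1) normalize to the unit sphere,
    2) rotate out the phase of the first entry (canonical representative),
    3) stack real and imaginary parts.
    Sketch of a CPS-style parameterization; reference-entry choice is an
    illustrative assumption."""
    h = h / np.linalg.norm(h)
    h = h * np.exp(-1j * np.angle(h[0]))
    return np.concatenate([h.real, h.imag])
```

Any two channels differing only by a global phase e^{j·phi} then map to the identical feature vector, so the DL model never has to learn that redundancy.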
Abstract: Remote and resource-constrained Internet-of-Things (IoT) deployments often lack terrestrial connectivity for task offloading, motivating non-terrestrial networks (NTNs) with onboard multi-access edge computing (MEC) capabilities. Nevertheless, in the presence of malicious actors, authentication must be performed to prevent unauthorized nodes from draining the computing resources of the NTN nodes. As a solution, we propose a four-layer MEC-enabled NTN with unmanned aerial vehicles (UAVs) acting as access nodes, a high-altitude platform station (HAPS) acting as coordinator and authenticator, and a constellation of low-Earth-orbit satellites (LEOSats) acting as remote MEC servers. We consider a tag-based physical-layer authentication (PLA) scheme to authenticate legitimate users, and formulate a joint task-offloading and resource-allocation problem for the admitted tasks, which is solved via block coordinate descent. Numerical results show that the PLA scheme is efficient and performs better than the benchmark schemes. We also demonstrate that the proposed scheme is robust against malicious attacks even under relaxed false-alarm constraints.
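Tag-based PLA, as used above, superimposes a low-power secret tag on the message signal; the authenticator correlates the residual with the expected tag and compares against a threshold set by the false-alarm constraint. A toy noiseless sketch (signal model and threshold are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def pla_detect(y, s_hat, tag, threshold):
    """Tag-based PLA sketch: remove the message estimate s_hat from the
    received signal y, correlate the residual with the shared secret tag,
    and accept if the normalized test statistic exceeds the threshold."""
    residual = y - s_hat
    stat = np.abs(np.vdot(tag, residual)) / np.linalg.norm(tag)
    return bool(stat > threshold)
```

A legitimate transmitter that embedded the tag passes the test, while an attacker without the shared tag produces a near-zero statistic and is rejected, keeping the NTN's MEC resources for admitted tasks.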
Abstract: The joint communications and sensing (JCAS) paradigm is envisioned as a core capability of sixth-generation (6G) wireless networks, enabling the integration of data communication and environmental sensing within a unified system. By reusing spectrum, waveforms, and hardware resources, JCAS improves spectral efficiency and reduces system complexity and hardware cost, while enabling new use cases. Nevertheless, the realization of JCAS is hindered by inherent trade-offs between communication and sensing objectives, limited controllability of wireless propagation, and stringent hardware and design constraints. Simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have recently emerged as a promising technology to address these challenges by enabling full-space programmable manipulation of electromagnetic waves. This survey provides a systematic and in-depth review of STAR-RIS-enabled JCAS systems. Specifically, we first introduce the fundamental principles of JCAS and STAR-RIS. We then classify and review the state-of-the-art research on STAR-RIS-assisted JCAS from multiple perspectives, encompassing system architectures, waveform and beamforming design, resource allocation, optimization frameworks, and learning-based control. Finally, we identify key open challenges that remain unsolved and outline promising future research directions toward intelligent, flexible, and perceptive 6G wireless networks.
Abstract: In this work, we propose an intelligent optimization framework for a multi-user communication system integrating movable antennas (MAs) and a reconfigurable intelligent surface (RIS) under the rate-splitting multiple access (RSMA) protocol. The system sum rate is maximized through joint optimization of transmit precoding vectors, the RIS reflection matrix, common-rate allocation, and MA positions, subject to quality-of-service (QoS), power-budget, common-rate decoding, and mutual coupling constraints. Imperfect channel state information (CSI) is considered for all links, where robustness is ensured by modeling channel estimation errors within a bounded uncertainty region, guaranteeing worst-case performance reliability. The resulting non-convex problem is solved using an alternating optimization framework. The precoding subproblem is reformulated as a semidefinite programming (SDP) problem via linear matrix inequalities derived using the S-procedure. The RIS reflection matrix is optimized using successive convex approximation (SCA), yielding an equivalent SDP formulation. The MA position optimization is addressed through SCA combined with the block coordinate descent (BCD) method. Numerical results validate the effectiveness of the proposed framework and demonstrate fast convergence.
Abstract: The explosive growth in wireless service demand has prompted the evolution of integrated satellite-terrestrial networks (ISTNs) to overcome the limitations of traditional terrestrial networks (TNs) in terms of coverage, spectrum efficiency, and deployment cost. In particular, leveraging low Earth orbit (LEO) satellites and dynamic spectrum sharing (DSS), ISTNs offer promising solutions but face significant challenges due to diverse terrestrial environments, user and satellite mobility, and long LEO-to-ground propagation distances. To address these challenges, the digital twin (DT) has emerged as a promising technology that offers virtual replicas of real-world systems, facilitating prediction for resource management. In this work, we study a time-window-based DT-aided DSS framework for ISTNs, enabling joint long-term and short-term resource decisions to reduce system congestion. Based on this framework, two optimization problems are formulated, which aim to optimize resource management using DT information and to refine the obtained solutions with actual real-time information, respectively. To efficiently solve these problems, we propose algorithms using compressed-sensing-based and successive convex approximation techniques. Simulation results using actual traffic data and a London 3D map demonstrate the superiority of our proposed algorithms over benchmarks in terms of congestion minimization. The results also show the adaptability and practical feasibility of our proposed solutions.
Abstract: This paper proposes a hybrid beamforming framework for massive multiple-input multiple-output (MIMO) in near-space airship-borne communications. To achieve high energy efficiency (EE) in energy-constrained airships, a dynamic subarray structure is introduced, where each radio frequency chain (RFC) is connected to a disjoint subset of the antennas according to channel state information (CSI). The proposed joint dynamic hybrid beamforming network (DyHBFNet) comprises three key components: 1) an analog beamforming network (ABFNet) that optimizes the analog beamforming matrices and provides auxiliary information for the antenna selection network (ASNet) design, 2) an ASNet that dynamically optimizes the connections between antennas and RFCs, and 3) a digital beamforming network (DBFNet) that optimizes the digital beamforming matrices by employing a model-driven weighted minimum mean square error algorithm for improving beamforming performance and convergence speed. The proposed ABFNet, ASNet, and DBFNet are all designed based on advanced Transformer encoders. Simulation results demonstrate that the proposed framework significantly enhances spectral efficiency and EE compared to baseline schemes. Additionally, its robust performance under imperfect CSI makes it a scalable solution for practical implementations.
Abstract: Modern Earth observation (EO) systems increasingly rely on high-resolution imagery to support critical applications such as environmental monitoring, disaster response, and land-use analysis. Although these applications benefit from detailed visual data, the resulting data volumes impose significant challenges on satellite communication systems constrained by limited bandwidth, power, and dynamic link conditions. To address these limitations, this paper investigates Deep Joint Source-Channel Coding (DJSCC) as an effective source-channel paradigm for the transmission of EO imagery. We focus on two complementary aspects of semantic loss in DJSCC-based systems. First, a reconstruction-centric framework is evaluated by analyzing the semantic degradation of reconstructed images under varying compression ratios and channel signal-to-noise ratios (SNRs). Second, a task-oriented framework is developed by integrating DJSCC with lightweight, application-specific models (e.g., EfficientViT), with performance measured using downstream task accuracy rather than pixel-level fidelity. Based on extensive empirical analysis, we propose a unified semantic loss framework that captures both reconstruction-centric and task-oriented performance within a single model. This framework characterizes the implicit relationship between JSCC compression, channel SNR, and semantic quality, offering actionable insights for the design of robust and efficient EO imagery transmission under resource-constrained satellite links.