Department of Electronic Systems, Aalborg University, Denmark
Abstract: 6G must be designed to withstand, adapt to, and evolve amid prolonged, complex disruptions. Mobile networks have already shifted from efficiency-first to sustainability-aware design; this white paper argues that resilience must likewise become a primary design goal, alongside sustainability and efficiency, spanning technology, architecture, and economics. We motivate resilience by analysing dependencies between mobile networks and other critical systems, such as energy, transport, and emergency services, and illustrate how cascading failures propagate through infrastructures. We formalise resilience using the 3R framework: reliability, robustness, resilience. We then translate this framework into measurable capabilities: graceful degradation, situational awareness, rapid reconfiguration, and learning-driven improvement and recovery. Architecturally, we promote edge-native and locality-aware designs, open interfaces, and programmability to enable islanded operation, fallback modes, and multi-layer diversity (radio, compute, energy, timing). Key enablers include AI-native control loops with verifiable behaviour, zero-trust security rooted in hardware and supply-chain integrity, and networking techniques that prioritise critical traffic, time-sensitive flows, and inter-domain coordination. Resilience also has a techno-economic dimension: open platforms and high-quality complementors generate ecosystem externalities that enhance resilience while opening new markets. We identify nine business-model groups and several patterns aligned with the 3R objectives, and we outline governance and standardisation needs. This white paper serves as an initial step and a catalyst for 6G resilience, aiming to equip researchers, practitioners, policymakers, and the public with the essential components to understand and shape its development.
Abstract: This paper investigates the deployment of radio stripe systems for indoor radio-frequency (RF) wireless power transfer (WPT) in line-of-sight near-field scenarios. The focus is on environments where energy demand is concentrated in specific areas, referred to as 'hotspots': spatial zones with higher user density or consistent energy requirements. We formulate a joint clustering and radio stripe deployment problem that aims to maximize the minimum received power across all hotspots. To manage the problem's complexity, we decouple it into two stages: i) clustering, which assigns radio stripes to hotspots based on their spatial positions and near-field propagation characteristics, and ii) antenna element placement optimization. In particular, we propose four radio stripe deployment algorithms. Two are based on general successive convex approximation (SCA) and signomial programming (SGP) methods. The other two are shape-constrained solutions in which antenna elements are arranged along either straight lines or regular polygons, enabling simpler deployment. Numerical results show that the proposed clustering method converges effectively, with Chebyshev initialization significantly outperforming random initialization. The optimized deployments consistently outperform baseline benchmarks across a wide range of frequencies and radio stripe lengths, with the polygon-shaped deployment performing best overall, while the line-shaped deployment is advantageous under high boresight-gain settings, benefiting from increased spatial diversity and broader angular coverage.
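A minimal numerical sketch of the max-min objective above, comparing a line-shaped and a polygon-shaped element layout by the worst-case received power over a few hotspots. It assumes a free-space line-of-sight model with ideal per-element phase alignment; the geometry, frequency, and power budget are illustrative stand-ins, not the paper's setup.

```python
# Hedged sketch: compare line- vs polygon-shaped radio stripe layouts by the
# minimum received power over a set of hotspots, assuming free-space LoS
# propagation and ideal per-element phase alignment (coherent combining).
import numpy as np

C, FREQ = 3e8, 28e9                 # speed of light, carrier frequency [Hz] (assumed)
LAM = C / FREQ                      # wavelength [m]
N_ELEM, P_TX = 64, 1.0              # elements per stripe, total tx power [W] (assumed)

def line_layout(length=2.0, height=3.0):
    """Elements evenly spaced along a straight ceiling-mounted line."""
    x = np.linspace(-length / 2, length / 2, N_ELEM)
    return np.stack([x, np.zeros(N_ELEM), np.full(N_ELEM, height)], axis=1)

def polygon_layout(radius=1.0, height=3.0):
    """Elements evenly spaced along a regular polygon (many-sided, near-circular)."""
    ang = 2 * np.pi * np.arange(N_ELEM) / N_ELEM
    return np.stack([radius * np.cos(ang), radius * np.sin(ang),
                     np.full(N_ELEM, height)], axis=1)

def min_received_power(elems, hotspots):
    """Worst-case received power over hotspots; per-element amplitude follows
    the free-space factor lam / (4 * pi * d), phases assumed ideally aligned."""
    worst = np.inf
    for h in hotspots:
        d = np.linalg.norm(elems - h, axis=1)        # element-to-hotspot distances
        amp = np.sqrt(P_TX / N_ELEM) * LAM / (4 * np.pi * d)
        worst = min(worst, np.abs(amp.sum()) ** 2)   # coherent combining
    return worst

hotspots = np.array([[0.5, 0.3, 0.8], [-0.7, 0.2, 0.8], [0.1, -0.6, 0.8]])
for name, layout in [("line", line_layout()), ("polygon", polygon_layout())]:
    print(f"{name}: min received power = {min_received_power(layout, hotspots):.2e} W")
```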
Abstract: Quantum computing is poised to redefine the algorithmic foundations of communication systems. While quantum superposition and entanglement enable quadratic or exponential speedups for specific problems, identifying use cases where these advantages translate into engineering benefits remains nontrivial. This article presents the fundamentals of quantum computing in a style familiar to the communications community, outlining the current limits of fault-tolerant quantum computing and uncovering a mathematical harmony between quantum and wireless systems that makes the topic more enticing to wireless researchers. Based on a systematic review of pioneering and state-of-the-art studies, we distill common design trends for the research and development of quantum-accelerated communication systems and highlight lessons learned. The key insight is that classical heuristics can sharpen certain quantum parameters, underscoring the complementary strengths of classical and quantum computing. This article aims to catalyze interdisciplinary research at the frontier of quantum information processing and future communication systems.
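As a back-of-the-envelope illustration of the quadratic speedup mentioned above, the sketch below counts Grover iterations for searching the maximum-likelihood (ML) detection candidate set of a MIMO system; the constellation size and antenna count are illustrative assumptions, not drawn from the article.

```python
# Hedged sketch: quadratic speedup of Grover search applied to exhaustive
# ML detection. The iteration count (pi/4) * sqrt(N/M) is the standard
# Grover result; the system parameters below are illustrative assumptions.
import math

def grover_iterations(n_candidates, n_solutions=1):
    """Near-optimal Grover iteration count ~ (pi/4) * sqrt(N / M)."""
    return math.floor((math.pi / 4) * math.sqrt(n_candidates / n_solutions))

Q, NT = 16, 4                      # 16-QAM, 4 transmit antennas (assumed)
N = Q ** NT                        # exhaustive ML search space
print(f"classical exhaustive search: {N} metric evaluations")
print(f"Grover-style quantum search: ~{grover_iterations(N)} oracle calls")
```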
Abstract: Flexible and efficient wireless resource sharing across heterogeneous services is a key objective for future wireless networks. In this context, we investigate the performance of a system where latency-constrained internet-of-things (IoT) devices coexist with a broadband user. The base station adopts a grant-free access framework to manage resource allocation, either through orthogonal radio access network (RAN) slicing or by allowing shared access between services. For the IoT users, we propose a reinforcement learning (RL) approach based on double Q-learning (QL) to optimise their repetition-based transmission strategy, allowing them to adapt to varying levels of interference and meet a predefined latency target. We evaluate the system's performance in terms of the cumulative distribution function of the IoT users' latency, as well as the broadband user's throughput and energy efficiency (EE). Our results show that the proposed RL-based access policies significantly enhance the latency performance of IoT users in both RAN slicing and RAN sharing scenarios, while preserving desirable broadband throughput and EE. Furthermore, the proposed policies make RAN sharing energy-efficient at low IoT traffic levels and RAN slicing favourable under high IoT traffic.
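A minimal sketch of a double Q-learning update of the kind the abstract describes, with a toy state/action space standing in for interference levels and repetition counts; the reward, environment, and table sizes are illustrative assumptions, not the paper's MDP design.

```python
# Hedged sketch of double Q-learning: two value tables, each updated using
# the other's estimate at the greedy action, removing the maximisation bias
# of vanilla Q-learning. States/actions/rewards here are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 8, 4             # e.g. interference levels x repetition counts
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
qa = np.zeros((N_STATES, N_ACTIONS))   # first estimator
qb = np.zeros((N_STATES, N_ACTIONS))   # second estimator

def select_action(s):
    """Epsilon-greedy on the sum of both tables."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(qa[s] + qb[s]))

def double_q_update(s, a, r, s_next):
    """Randomly pick one table to update, bootstrapping from the other."""
    if rng.random() < 0.5:
        a_star = int(np.argmax(qa[s_next]))
        qa[s, a] += ALPHA * (r + GAMMA * qb[s_next, a_star] - qa[s, a])
    else:
        b_star = int(np.argmax(qb[s_next]))
        qb[s, a] += ALPHA * (r + GAMMA * qa[s_next, b_star] - qb[s, a])

# Toy interaction loop with a stand-in environment (illustrative only).
for _ in range(1000):
    s = int(rng.integers(N_STATES))
    a = select_action(s)
    r = -abs(a - s % N_ACTIONS)        # toy reward: penalise mismatched repetitions
    double_q_update(s, a, r, int(rng.integers(N_STATES)))
```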
Abstract: Mobile users are prone to beam failure due to beam drifting in millimeter wave (mmWave) communications. Sensing can help alleviate beam drifting through timely beam updates at low overhead, since it requires no user feedback. This work studies the problem of optimizing sensing-aided communication by dynamically managing the beams allocated to mobile users. A multi-beam scheme is introduced, which allocates multiple beams to users that need an update of their angle-of-departure (AoD) estimates and a single beam to users whose AoD estimates already meet the precision requirement. A deep reinforcement learning (DRL)-assisted method is developed to optimize the beam allocation policy, relying only upon the sensing echoes. For comparison, a heuristic AoD-based method using an approximated Cramér-Rao lower bound (CRLB) for allocation is also presented. Both methods require neither user feedback nor prior state-evolution information. Results show that the DRL-assisted method achieves a considerable throughput gain over the conventional beam sweeping method and the AoD-based method, and it is robust to different user speeds.
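A hedged sketch of the heuristic AoD-based allocation logic: users whose approximate AoD error bound exceeds the precision target receive multiple beams, while the rest keep a single beam. The CRLB-style bound below is a simplified SNR- and aperture-based proxy, and the threshold and beam counts are illustrative assumptions.

```python
# Hedged sketch of threshold-based beam allocation driven by an approximate
# AoD error bound. The proxy bound, threshold, and beam counts are assumed
# for illustration; the paper's approximated CRLB is more detailed.
import numpy as np

PRECISION_TARGET_DEG = 1.0          # required AoD estimation accuracy (assumed)
MULTI, SINGLE = 3, 1                # beams allocated per user class (assumed)

def aod_error_proxy_deg(snr_linear, n_antennas):
    """Toy stand-in for the approximated CRLB on the AoD estimate:
    the error shrinks with SNR and with aperture (antenna count)."""
    return np.degrees(1.0 / np.sqrt(snr_linear * n_antennas ** 2))

def allocate_beams(user_snrs, n_antennas=64):
    """Multiple beams for users needing an AoD update, one beam otherwise."""
    return [MULTI if aod_error_proxy_deg(snr, n_antennas) > PRECISION_TARGET_DEG
            else SINGLE for snr in user_snrs]

print(allocate_beams([0.5, 5.0, 50.0]))   # low-SNR users receive extra beams
```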
Abstract: We address the channel estimation problem in reconfigurable intelligent surface (RIS)-aided broadband systems by proposing a dual-structure and multi-dimensional transformations (DS-MDT) algorithm. The proposed approach leverages the dual-structure features of the channel parameters to assist users experiencing weaker channel conditions, thereby enhancing estimation performance. Moreover, given that the channel parameters are distributed across multiple dimensions of the received tensor, the proposed algorithm employs multi-dimensional transformations to effectively isolate and extract the distinct parameters. Numerical results demonstrate that the proposed algorithm reduces the normalized mean square error (NMSE) by up to 10 dB while maintaining lower complexity compared to state-of-the-art methods.
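A toy illustration of the multi-dimensional transformation idea: when parameters sit on different modes of the received tensor, unfolding along each mode isolates the corresponding factor. A noiseless rank-1 model is assumed below; the actual DS-MDT algorithm is considerably more elaborate.

```python
# Hedged sketch: parameters on different tensor modes can be separated by
# mode-n unfolding plus SVD. A rank-1 noiseless toy model is assumed.
import numpy as np

def steering(n, x):
    """Uniform-linear-array style steering vector exp(j * pi * k * x)."""
    return np.exp(1j * np.pi * np.arange(n) * x)

# Three factors standing in for, e.g., angle/delay/time signatures (assumed).
a, b, c = steering(8, 0.3), steering(16, -0.5), steering(4, 0.7)
tensor = np.einsum('i,j,k->ijk', a, b, c)       # rank-1 received tensor

def dominant_factor(t, mode):
    """Mode-n unfolding followed by SVD recovers that mode's factor (up to scale)."""
    unfolded = np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)
    u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
    return u[:, 0]

for mode, truth in enumerate([a, b, c]):
    est = dominant_factor(tensor, mode)
    align = np.abs(truth.conj() @ est) / np.linalg.norm(truth)  # ~1 if recovered
    print(f"mode {mode}: alignment = {align:.4f}")
```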
Abstract: This paper explores a multi-antenna dual-functional radio frequency (RF) wireless power transfer (WPT) and radar system for charging multiple unresponsive devices. We formulate a beamforming problem to maximize the minimum received power at the devices without prior knowledge of their locations or channel state information (CSI). We propose dividing transmission blocks into sensing and charging phases. First, the devices' locations are estimated by sending sensing signals and performing multiple signal classification (MUSIC) and least squares estimation on the received echo. The estimates are then used for CSI prediction and RF-WPT beamforming. Simulation results reveal that the optimal split between sensing and charging blocks depends on the system setup. Our sense-then-charge (STC) protocol outperforms CSI-free benchmarks and achieves near-optimal performance given sufficiently many receive antennas and sufficient transmit power, although its performance degrades with insufficient antennas or power as the number of devices grows.
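A minimal sketch of the sensing phase, running MUSIC on simulated echoes to estimate device angles under a uniform-linear-array narrowband model; the array size, device angles, noise level, and snapshot count are illustrative assumptions.

```python
# Hedged sketch of MUSIC angle estimation on simulated echoes: eigendecompose
# the sample covariance, take the noise subspace, and locate pseudospectrum
# peaks. All scenario parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_RX, N_SNAP, K = 16, 200, 2                     # antennas, snapshots, devices
true_deg = np.array([-20.0, 35.0])               # ground-truth angles (assumed)

def steer(deg):
    """ULA steering vectors for angles in degrees (half-wavelength spacing)."""
    return np.exp(1j * np.pi * np.outer(np.arange(N_RX), np.sin(np.radians(deg))))

A = steer(true_deg)                              # array manifold (N_RX x K)
S = rng.standard_normal((K, N_SNAP)) + 1j * rng.standard_normal((K, N_SNAP))
noise = rng.standard_normal((N_RX, N_SNAP)) + 1j * rng.standard_normal((N_RX, N_SNAP))
X = A @ S + 0.1 * noise                          # received echo snapshots

R = X @ X.conj().T / N_SNAP                      # sample covariance
eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
En = eigvec[:, :N_RX - K]                        # noise subspace

grid = np.linspace(-90, 90, 1801)
spec = 1.0 / np.linalg.norm(En.conj().T @ steer(grid), axis=0) ** 2
local_max = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
top = local_max[np.argsort(spec[local_max])[-K:]]
print("estimated AoAs [deg]:", np.sort(grid[top]))
```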
Abstract: We investigate a lossy source compression problem in which both the encoder and decoder are equipped with a pre-trained sequence predictor. We propose an online lossy compression scheme that, under a 0-1 loss distortion function, ensures a deterministic, per-sequence upper bound on the distortion (outage) level at any time instant. The outage guarantees hold irrespective of any assumption on the distribution of the sequences to be encoded or on the quality of the predictor at the encoder and decoder. The proposed method, referred to as online conformal compression (OCC), is built upon online conformal prediction, a method for constructing confidence intervals for arbitrary predictors. Numerical results show that OCC achieves a compression rate comparable to that of an idealized scheme in which the encoder, with hindsight, selects the optimal subset of symbols to describe to the decoder, while satisfying the overall outage constraint.
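A hedged sketch of the online conformal recursion that OCC builds upon: a threshold on the predictor's nonconformity scores adapts so that the empirical outage rate tracks the target. The predictive model below is a toy stand-in, and this is a generic threshold-tracking variant of online conformal prediction rather than the paper's exact scheme.

```python
# Hedged sketch of online conformal prediction: the confidence set is the
# set of symbols whose score falls below an adaptive threshold; the threshold
# widens after each outage and shrinks otherwise, so the long-run outage rate
# tracks the target. The predictor here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)
ALPHA, ETA, VOCAB = 0.1, 0.1, 32      # target outage, step size, alphabet (assumed)
q = 0.0                               # adaptive score threshold
errors, set_sizes = [], []

for t in range(5000):
    probs = rng.dirichlet(np.ones(VOCAB) * 0.3)     # toy predictive distribution
    scores = -np.log(probs + 1e-12)                 # nonconformity scores
    conf_set = np.where(scores <= q)[0]             # symbols covered by the set
    y = rng.choice(VOCAB, p=probs)                  # true next symbol
    err = float(y not in conf_set)                  # outage indicator
    q += ETA * (err - ALPHA)                        # widen on outage, shrink otherwise
    errors.append(err)
    set_sizes.append(len(conf_set))

print(f"empirical outage: {np.mean(errors):.3f} (target {ALPHA})")
print(f"average set size: {np.mean(set_sizes):.1f} of {VOCAB}")
```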
Abstract: Modern software-defined networks, such as Open Radio Access Network (O-RAN) systems, rely on artificial intelligence (AI)-powered applications running on controllers interfaced with the radio access network. To ensure that these AI applications operate reliably at runtime, they must be properly calibrated before deployment. A promising and theoretically grounded approach to calibration is conformal prediction (CP), which enhances any AI model by transforming it into a provably reliable set predictor that provides error bars for estimates and decisions. CP requires calibration data that matches the distribution of the environment encountered at runtime. In practice, however, network controllers often have access only to data collected under different contexts, such as varying traffic patterns and network conditions, leading to a mismatch between the calibration and runtime distributions. This paper introduces a novel methodology to address this calibration-test distribution shift. The approach leverages meta-learning to develop a zero-shot estimator of distribution shifts that relies solely on contextual information. The proposed method, called meta-learned context-dependent weighted conformal prediction (ML-WCP), enables effective calibration of AI applications without requiring data from the current context. Additionally, it can incorporate data from multiple contexts to further enhance calibration reliability.
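A minimal sketch of the weighted conformal prediction step that ML-WCP is built on: calibration scores are reweighted by likelihood ratios between the runtime and calibration contexts before taking the quantile. In ML-WCP those weights would come from the meta-learned zero-shot estimator; here they are simply an input, and the function name is hypothetical.

```python
# Hedged sketch of weighted conformal prediction: compute the (1 - alpha)
# quantile of the weighted calibration-score distribution, reserving one
# unit of mass for the test point (whose score is treated as +inf).
import numpy as np

def weighted_conformal_quantile(cal_scores, weights, alpha=0.1):
    """Threshold for the prediction set under context-shift weights.
    `weights` would be likelihood ratios between runtime and calibration
    contexts (in ML-WCP, estimated by the meta-learned model)."""
    order = np.argsort(cal_scores)
    s = np.asarray(cal_scores, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    w = w / (w.sum() + 1.0)                   # +1 mass reserved for the test point
    cum = np.cumsum(w)
    idx = np.searchsorted(cum, 1 - alpha)
    return s[idx] if idx < len(s) else np.inf  # inf -> predict the full set

cal_scores = np.random.default_rng(0).exponential(1.0, 200)
weights = np.ones(200)                         # uniform weights = standard split CP
print(f"threshold: {weighted_conformal_quantile(cal_scores, weights):.3f}")
```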
Abstract: This paper studies federated learning (FL) in low Earth orbit (LEO) satellite constellations, where satellites are connected via intra-orbit inter-satellite links (ISLs) to their neighboring satellites. During the FL training process, satellites in each orbit forward gradients from nearby satellites, which are eventually transferred to the parameter server (PS). To enhance the efficiency of the FL training process, satellites apply in-network aggregation, referred to as incremental aggregation. In this work, the gradient sparsification methods from [1] are applied to satellite scenarios to improve bandwidth efficiency during incremental aggregation. Numerical results highlight a more than 4x increase in bandwidth efficiency as the number of satellites in the orbital plane increases.
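A hedged sketch of incremental aggregation with top-k gradient sparsification along an intra-orbit ring: each satellite adds its own sparsified gradient to the partial sum and re-sparsifies before forwarding, so every hop carries only k entries. The ring size, model dimension, and sparsity level are illustrative assumptions, and the exact sparsification scheme of [1] may differ.

```python
# Hedged sketch: in-network (incremental) aggregation over an intra-orbit
# ISL ring with top-k sparsification, so each hop forwards only k entries
# instead of the full gradient. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_SATS, DIM, K = 10, 1000, 50        # satellites per orbit, gradient size, kept entries

def top_k_sparsify(g, k):
    """Keep only the k largest-magnitude entries (classic top-k compression)."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

grads = [rng.standard_normal(DIM) for _ in range(N_SATS)]

partial = np.zeros(DIM)
for g in grads:                      # hop-by-hop along the ring toward the PS
    # Re-sparsifying the running sum caps the per-hop payload at K entries.
    partial = top_k_sparsify(partial + top_k_sparsify(g, K), K)

dense = sum(grads)                   # what the PS would see without compression
cos = partial @ dense / (np.linalg.norm(partial) * np.linalg.norm(dense))
print(f"forwarded entries per hop: {K} of {DIM}; alignment with dense sum: {cos:.2f}")
```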