We consider an Internet of Things (IoT) setup in which a base station (BS) collects data from nodes that use two different communication modes. The first is pull-based communication, where the BS retrieves data from specific nodes through queries. The nodes that apply pull-based communication are equipped with wake-up receivers: upon a query, the BS sends a wake-up signal (WuS) to activate the corresponding wake-up devices (WuDs). The second is push-based communication, in which the nodes themselves decide when to transmit to the BS. We consider a time-slotted model in which the slots of each frame are shared between pull- and push-based communications. This coexistence scenario gives rise to a new type of problem with fundamental trade-offs in the sharing of communication resources: the objective of serving a maximum number of queries within a specified deadline limits the transmission opportunities of the push sensors, and vice versa. This work develops a mathematical model that characterizes these trade-offs, validates it through simulations, and optimizes the frame design to meet the objectives of both the pull- and push-based communications.
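To make the pull-push trade-off concrete, the following minimal Monte Carlo sketch estimates, for a varying number of slots reserved for pull traffic, the fraction of queries served within their deadline and the push throughput. All parameters (frame length `F`, deadline `D`, Poisson query arrivals, and a slotted-ALOHA push model) are hypothetical illustrations, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(0)

F = 20        # slots per frame (hypothetical)
D = 3         # query deadline, in frames (hypothetical)
LAM_Q = 2.0   # mean number of new queries per frame (assumed Poisson)
N_PUSH = 30   # push nodes; each transmits in a shared slot w.p. P_PUSH
P_PUSH = 0.05

def simulate(k_pull, frames=20_000):
    """k_pull slots per frame serve queued WuDs; the remaining F - k_pull
    slots are shared by push nodes via slotted ALOHA (assumed access model)."""
    backlog = []               # remaining deadline (in frames) of pending queries
    served = missed = push_ok = 0
    for _ in range(frames):
        backlog += [D] * rng.poisson(LAM_Q)
        backlog.sort()                         # serve most urgent queries first
        n_srv = min(k_pull, len(backlog))
        served += n_srv
        backlog = [d - 1 for d in backlog[n_srv:]]
        missed += sum(d == 0 for d in backlog)
        backlog = [d for d in backlog if d > 0]
        # a shared slot carries a push packet iff exactly one node transmits
        tx = rng.binomial(N_PUSH, P_PUSH, size=F - k_pull)
        push_ok += int(np.sum(tx == 1))
    total_q = served + missed + len(backlog)
    return served / total_q, push_ok / frames

for k in (2, 4, 6, 8):
    q, s = simulate(k)
    print(f"pull slots {k}: query success {q:.3f}, push throughput {s:.2f}/frame")
```

Sweeping `k_pull` traces exactly the tension the abstract describes: more pull slots raise the query success probability while shrinking the push nodes' transmission opportunities.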
Neuromorphic computing leverages the sparsity of temporal data to reduce processing energy by activating a small subset of neurons and synapses at each time step. When deployed for split computing in edge-based systems, remote neuromorphic processing units (NPUs) can reduce the communication power budget by communicating asynchronously using sparse impulse radio (IR) waveforms. This way, the sparsity of the input signal translates directly into energy savings in both computation and communication. However, with IR transmission, the main contributor to the overall energy consumption remains the power required to keep the main radio on. This work proposes a novel architecture that integrates a wake-up radio mechanism within a split computing system consisting of remote, wirelessly connected NPUs. A key challenge in the design of a wake-up radio-based neuromorphic split computing system is the selection of thresholds for sensing, wake-up signal detection, and decision making. To address this problem, as a second contribution, this work proposes a novel methodology that leverages a digital twin (DT), i.e., a simulator, of the physical system, coupled with a sequential statistical testing approach known as Learn Then Test (LTT), to provide theoretical reliability guarantees. The proposed DT-LTT methodology is broadly applicable to other design problems and is showcased here for neuromorphic communications. Experimental results validate the design and the analysis, confirming the theoretical reliability guarantees and illustrating trade-offs among reliability, energy consumption, and informativeness of the decisions.
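As an illustration of the LTT component alone (not the full DT-LTT pipeline), the sketch below certifies candidate threshold configurations using an exact binomial p-value and fixed-sequence multiple testing. The loss matrix would come from the digital twin; the function name, ordering of configurations, and toy data are all assumptions.

```python
import numpy as np
from scipy.stats import binom

def ltt_fixed_sequence(losses, alpha=0.1, delta=0.05):
    """Learn Then Test with fixed-sequence testing.
    losses: (n_cfg, n_val) array of 0/1 losses, one row per candidate threshold
    configuration, evaluated on held-out (e.g., DT-generated) data.
    Returns indices certified to have risk <= alpha with prob. >= 1 - delta."""
    certified = []
    n = losses.shape[1]
    for j in range(losses.shape[0]):
        k = int(losses[j].sum())
        # exact p-value for H0: risk(config j) > alpha
        p_val = binom.cdf(k, n, alpha)
        if p_val <= delta:
            certified.append(j)
        else:
            break   # fixed sequence: stop at the first non-rejected hypothesis
    return certified

# toy usage: 5 configurations ordered from conservative to aggressive (assumed)
rng = np.random.default_rng(0)
true_risks = np.array([0.02, 0.04, 0.07, 0.12, 0.20])
losses = rng.binomial(1, true_risks[:, None], size=(5, 2000))
print("certified configs:", ltt_fixed_sequence(losses))
```

Because the binomial tail p-value is super-uniform under the null, every configuration in the returned set carries the stated risk guarantee simultaneously, which is the property the abstract's reliability claims rest on.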
A wireless communication system is studied that operates in the presence of multiple reconfigurable intelligent surfaces (RISs). In particular, a multi-operator environment is considered where each operator utilizes an RIS to enhance its communication quality. Although there is no out-of-band interference (since each operator uses isolated spectrum resources), RISs controlled by different operators do affect one another's system performance through their inherently rapid phase shift adjustments, which occur independently. The system performance of such a communication scenario is analytically studied for the practical case where only discrete phase shifts are available at the RISs. The proposed framework is quite general, since it is valid under arbitrary channel fading conditions and with or without the transceivers' direct link. Finally, the derived analytical results are verified via numerical and simulation trials, and some novel and useful engineering insights are presented.
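For intuition only, the Monte Carlo sketch below samples the effective channel gain seen by one operator when its own RIS applies b-bit quantized co-phasing while the other operator's RIS scatters the same signal with independent random discrete phases per realization. Rayleigh fading on every hop, element counts, and bit width are assumptions; the paper's analytical framework holds under arbitrary fading.

```python
import numpy as np

rng = np.random.default_rng(1)

def cn(shape):
    """i.i.d. CN(0, 1) samples (Rayleigh-fading assumption)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def channel_gain(n_own=64, n_other=64, bits=2, trials=20_000, direct=True):
    step = 2 * np.pi / 2 ** bits
    a = cn((trials, n_own)) * cn((trials, n_own))       # cascade via own RIS
    b = cn((trials, n_other)) * cn((trials, n_other))   # cascade via the other RIS
    phi_a = np.round(-np.angle(a) / step) * step        # b-bit co-phasing (own)
    phi_b = step * rng.integers(0, 2 ** bits, b.shape)  # uncontrolled phases (other)
    h = (a * np.exp(1j * phi_a)).sum(1) + (b * np.exp(1j * phi_b)).sum(1)
    if direct:
        h += cn(trials)                                 # transceivers' direct link
    return np.abs(h) ** 2

g = channel_gain()
print(f"mean gain {g.mean():.0f}, 1% outage threshold {np.quantile(g, 0.01):.1f}")
```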
This paper presents an experimental validation of the prediction of rare fading events using channel distribution information (CDI) maps, which predict the channel statistics at a given location via spatial interpolation of measurements acquired at surrounding locations. Using experimental channel measurements from 127 locations, we demonstrate the use case of providing statistical guarantees for rate selection in ultra-reliable low-latency communication (URLLC) with CDI maps. Using only the user location and the estimated map, we meet the desired outage probability with a probability between 93.6% and 95.6%, against a target of 95%. In contrast, a model-based baseline scheme that assumes Rayleigh fading meets the target outage requirement with a probability of only 77.2%. The results demonstrate the practical relevance of CDI maps for resource allocation in URLLC.
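A minimal sketch of the map-based rate-selection idea follows. It interpolates per-location channel statistics to the user position and picks the rate at the predicted outage quantile. The inverse-distance interpolator, the Gaussian-in-dB SNR model, and the synthetic map data are stand-ins for the paper's actual CDI estimator and measurements.

```python
import numpy as np
from scipy.stats import norm

def idw(xq, X, v, p=2.0):
    """Inverse-distance-weighted spatial interpolation of per-location statistics."""
    d = np.linalg.norm(X - xq, axis=1)
    if d.min() < 1e-9:
        return v[d.argmin()]
    w = d ** -p
    return w @ v / w.sum()

def select_rate(xq, X, mu_db, sigma_db, eps=1e-3):
    """Rate with predicted outage eps at location xq, assuming the SNR in dB is
    Gaussian with interpolated mean/std (a stand-in for the paper's CDI model)."""
    mu = idw(xq, X, mu_db)
    sg = idw(xq, X, sigma_db)
    snr_eps = norm.ppf(eps, loc=mu, scale=sg)        # eps-quantile of SNR [dB]
    return np.log2(1 + 10 ** (snr_eps / 10))         # Shannon rate, bits/s/Hz

# toy usage with synthetic map data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, (127, 2))                    # measurement locations
mu_db = 20 - 0.1 * X[:, 0] + rng.normal(0, 1, 127)   # synthetic SNR mean field
sigma_db = np.full(127, 4.0)
print(f"rate: {select_rate(np.array([50., 50.]), X, mu_db, sigma_db):.3f} b/s/Hz")
```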
Mega-constellations of small satellites have evolved into a source of massive amounts of valuable data. To manage this data efficiently, on-board federated learning (FL) enables satellites to train a machine learning (ML) model collaboratively without having to share the raw data. This paper introduces a scheme for scheduling on-board FL in constellations connected with intra-orbit inter-satellite links. The proposed scheme exploits the predictable pattern of visibility between satellites and the ground station (GS), both at the individual-satellite level and cumulatively within each orbit, to mitigate intermittent connectivity and make the best use of the available time. To this end, two distinct schedulers are employed: one coordinating the FL procedures across orbits, and the other controlling those within each orbit. The two schedulers cooperatively determine the appropriate times for global updates at the GS and then allocate to the satellites of each orbit a local-training duration proportional to the usable time until the next global update. The scheme is shown to improve test accuracy within a shorter training time.
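A toy sketch of the intra-orbit side of such scheduling (function name, window format, and overhead term all hypothetical): given each orbit's predicted GS contact windows, it sizes the local-training budget by the time still usable before the next global update, net of an assumed communication overhead.

```python
def local_training_budgets(windows_per_orbit, now, t_global, t_comm):
    """windows_per_orbit: {orbit_id: [(start, end), ...]} predicted GS contacts.
    Returns per-orbit local-training time budgets: time from `now` until the
    last contact that can still deliver results before the global update at
    t_global, minus the communication overhead t_comm (all units: seconds)."""
    budgets = {}
    for orbit, windows in windows_per_orbit.items():
        deadlines = [end for (_, end) in windows if end <= t_global]
        usable = (max(deadlines) - now - t_comm) if deadlines else 0.0
        budgets[orbit] = max(0.0, usable)
    return budgets

# toy usage: two orbits with predictable visibility (values illustrative)
w = {0: [(100.0, 160.0), (700.0, 760.0)],
     1: [(300.0, 360.0), (900.0, 960.0)]}
print(local_training_budgets(w, now=0.0, t_global=800.0, t_comm=30.0))
# -> {0: 730.0, 1: 330.0}: orbit 0 can train longer before its last usable contact
```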
Recent advances in AI technologies have notably expanded device intelligence, fostering federation and cooperation among distributed AI agents. These advances impose new requirements on future 6G mobile network architectures. To meet these demands, it is essential to transcend classical boundaries and integrate communication, computation, control, and intelligence. This paper presents the 6G-GOALS approach to goal-oriented and semantic communications for AI-Native 6G Networks. The proposed approach incorporates semantic, pragmatic, and goal-oriented communication into AI-native technologies, aiming to facilitate information exchange among intelligent agents in a more relevant, effective, and timely manner, thereby optimizing bandwidth, latency, energy, and electromagnetic field (EMF) radiation. The focus is on distilling data to its most relevant and terse representation, aligned with the source's intent, the destination's objectives and context, or a specific goal. 6G-GOALS builds on three fundamental pillars: i) AI-enhanced semantic data representation, sensing, compression, and communication; ii) foundational AI reasoning and causal semantic data representation, with contextual relevance and value for goal-oriented effectiveness; and iii) sustainability enabled by more efficient wireless services. Finally, we illustrate two proofs of concept implementing semantic, goal-oriented, and pragmatic communication principles in near-future use cases. Our study covers the project's vision, methodologies, and potential impact.
In this paper, we investigate a cascaded channel estimation method for a millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) system aided by a reconfigurable intelligent surface (RIS), where the base station (BS) is equipped with low-resolution analog-to-digital converters (ADCs) and both the BS and the RIS employ a uniform planar array (UPA). Owing to the sparsity of the mmWave channel, channel estimation can be posed as a compressed sensing (CS) problem. However, low-resolution quantization causes severe information loss, and traditional CS algorithms do not work well. To recover the signal and the sparse angular-domain channel from the quantized observations, we introduce Bayesian inference and the efficient vector approximate message passing (VAMP) algorithm to solve the quantized-output CS problem. To further improve the efficiency of the VAMP algorithm, a fast computation method based on the fast Fourier transform (FFT) is derived. Simulation results demonstrate the effectiveness and accuracy of the proposed cascaded channel estimation method for the RIS-aided mmWave massive MIMO system with few-bit ADCs. Furthermore, at low signal-to-noise ratio (SNR), the proposed method keeps the performance gap between low-resolution and infinite-resolution ADCs acceptably small, which implies the practical applicability of few-bit ADCs.
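To give a concrete feel for the few-bit quantized CS setup, the sketch below forms few-bit measurements of a sparse angular-domain channel and recovers it with a naive dequantize-then-ISTA baseline. This is not the paper's Bayesian VAMP; dimensions, quantizer, and regularization are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, bits, vmax):
    """Uniform mid-rise quantizer, 2**bits levels over [-vmax, vmax] (per I/Q)."""
    step = 2 * vmax / 2 ** bits
    idx = np.clip(np.floor((v + vmax) / step), 0, 2 ** bits - 1)
    return -vmax + (idx + 0.5) * step

def ista(A, y, lam, iters=400):
    """Complex ISTA for min 0.5*||y - Ax||^2 + lam*||x||_1 (naive baseline)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], complex)
    for _ in range(iters):
        r = x + A.conj().T @ (y - A @ x) / L
        x = np.exp(1j * np.angle(r)) * np.maximum(np.abs(r) - lam / L, 0)
    return x

n, m, k, bits = 256, 128, 8, 3               # angular bins, pilots, paths, ADC bits
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x = np.zeros(n, complex)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = A @ x + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
vmax = 3 * y.real.std()
yq = quantize(y.real, bits, vmax) + 1j * quantize(y.imag, bits, vmax)  # few-bit ADC

xh = ista(A, yq, lam=0.02)
print("NMSE (dB):", 10 * np.log10(np.linalg.norm(xh - x)**2 / np.linalg.norm(x)**2))
```

The gap between this baseline and unquantized recovery is the loss that a quantization-aware method such as the paper's VAMP is designed to close.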
Intelligent metasurfaces are among the favored technologies for sixth-generation (6G) networks, especially the reconfigurable intelligent surface (RIS), which has been extensively researched for various applications. In this context, a feature that deserves further exploration is the frequency scattering that occurs when the elements are periodically switched, a configuration referred to as the space-time-coding metasurface (STCM). This type of topology impairs established communication methods by generating undesirable interference in both frequency and space, an effect that worsens with wideband signals. Nevertheless, it has the potential to provide useful features for sensing and localization. This work exploits the sensing capabilities of STCMs for target detection, localization, and classification using narrowband downlink pilot signals at the base station (BS). The results of this novel approach show that a scattering point (SP) can be localized with sub-centimeter to sub-decimeter accuracy, depending on the SP position in space. We also analyze the associated detection and classification probabilities, which show reliable detection performance across the whole analyzed environment; the classification, in contrast, is bounded by physical constraints. We conclude that this method presents a promising approach for future integrated sensing and communication (ISAC) protocols by providing a tool to perform sensing and localization services using legacy communication signals.
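The harmonic structure that STCMs exploit can be sketched in a few lines: for a periodically switched element, the amplitude of the k-th frequency harmonic follows from the Fourier series of the time-coding sequence. The example below, with an assumed 1-bit, length-8 code, computes these coefficients via a DFT; the exact coefficients additionally carry a sinc-type envelope from the rectangular sub-slot pulses, omitted here for brevity.

```python
import numpy as np

def harmonic_amplitudes(code, phase_states=(0.0, np.pi)):
    """Per-harmonic complex amplitudes of the field scattered by one element
    switched periodically through phase_states[code[n]] over L sub-slots.
    The k-th DFT bin approximates the Fourier coefficient at offset k/T from
    the carrier (up to a sinc envelope from the rectangular sub-slot pulses)."""
    g = np.exp(1j * np.asarray(phase_states)[np.asarray(code)])
    return np.fft.fft(g) / len(g)

# 1-bit element with 50% duty-cycle switching (assumed example code)
a = harmonic_amplitudes([0, 0, 0, 0, 1, 1, 1, 1])
for k, ak in enumerate(a):
    print(f"harmonic {k}: |a_k| = {abs(ak):.3f}")
```

For this code the carrier component vanishes and the energy moves to odd harmonics, which is precisely the frequency scattering that interferes with legacy links yet carries the angle- and position-dependent signatures used for sensing.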
Driven by the envisioned network paradigms and the diverse functional demands of the sixth-generation (6G) mission-critical Internet of Things (MC-IoT), this paper foresees a goal-oriented integration of sensing, communication, computing, and control (GIS3C). We first provide an overview of the tasks, requirements, and challenges of MC-IoT. We then introduce an end-to-end GIS3C architecture in which goal-oriented communication is leveraged to bridge and empower the sensing, communication, computing, and control functionalities. By revealing the interplay among the subsystems in terms of key performance indicators and parameters, this paper introduces unified metrics, namely task completion effectiveness and cost, to facilitate S3C co-design in MC-IoT. Preliminary results demonstrate the benefits of GIS3C in improving task completion effectiveness while reducing cost. We also identify and highlight the remaining gaps and challenges in applying GIS3C to future 6G networks.
We consider a wireless networked control system (WNCS) with bidirectional imperfect links for real-time applications such as smart grids. To maintain the stability of the WNCS, captured by the probability that the plant state violates preset limits, at minimal cost, heterogeneous physical processes are monitored by multiple sensors. This status information, such as the dynamic plant state and Markov-process-based context information, is then received or estimated by the controller for remote control. However, scheduling multiple sensors and designing the controller under limited resources is challenging due to their coupling, delays, and transmission losses. We formulate a constrained Markov decision process (CMDP) to minimize the violation probability under cost constraints. We reveal the relationship between the goal and the different updating actions by analyzing the significance of information, which incorporates goal-related usefulness and contextual importance. Subsequently, a goal-oriented deterministic scheduling policy is proposed. Two sensing-assisted control strategies and a control-aware estimation policy are proposed to improve the violation probability-cost trade-off; these are integrated with the scheduling policy to form a goal-oriented co-design framework. Additionally, we explore retransmissions in the downlink and qualitatively analyze the scenarios in which they are preferable. Simulation results demonstrate that the proposed goal-oriented co-design policy outperforms previous work in simultaneously reducing the violation probability and cost.
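As a hedged sketch of the CMDP machinery underlying such a formulation (toy model: tabular states and actions, per-step violation and cost terms, deterministic greedy policies, whereas constrained optima may in general require randomization), the code below solves the Lagrangian relaxation by value iteration and bisects the multiplier until the discounted cost meets the budget.

```python
import numpy as np

def lagrangian_vi(P, v_viol, cost, lam, gamma=0.95, iters=500):
    """Value iteration on the Lagrangian L = violation + lam * cost.
    P: (A, S, S) transition tensor; v_viol, cost: (A, S) per-step terms."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = v_viol + lam * cost + gamma * P @ V     # shape (A, S)
        V = Q.min(axis=0)
    return Q.argmin(axis=0), V                      # greedy policy, values

def solve_cmdp(P, v_viol, cost, budget, gamma=0.95, tol=1e-3):
    """Bisection on the multiplier so the discounted cost meets the budget."""
    S = P.shape[1]
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        pi, _ = lagrangian_vi(P, v_viol, cost, lam, gamma)
        Ppi = P[pi, np.arange(S), :]                # induced chain of policy pi
        cpi = cost[pi, np.arange(S)]
        J = np.linalg.solve(np.eye(S) - gamma * Ppi, cpi)   # discounted cost
        if J.mean() > budget:
            lo = lam        # cost too high: penalize it more
        else:
            hi = lam
    return pi, lam

# toy instance: 4 states, 2 actions (update / stay silent), random dynamics
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(2, 4))                   # (A, S, S)
v_viol = np.array([[0., .2, .5, 1.], [0., .4, .8, 1.]])      # updating helps
cost = np.array([[1., 1., 1., 1.], [0., 0., 0., 0.]])        # updating costs
pi, lam = solve_cmdp(P, v_viol, cost, budget=10.0)
print("policy:", pi, " multiplier:", round(lam, 3))
```

The multiplier plays the role of a price on sensor updates: raising it trades violation probability for cost, which is the trade-off curve the proposed co-design framework is designed to improve.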