Abstract: The use of lightweight machine learning (ML) models in Internet of Things (IoT) networks enables resource-constrained IoT devices to perform on-device inference for several critical applications. However, inference accuracy deteriorates due to the non-stationarity of the IoT environment and the limited initial training data. To counteract this, the deployed models can be updated occasionally with newly observed data samples; however, this consumes additional energy, which is undesirable for energy-constrained IoT devices. This letter introduces an event-driven communication framework that strategically integrates continual learning (CL) in IoT networks for energy-efficient fault detection. Our framework enables the IoT device and the edge server (ES) to collaboratively update the lightweight ML model by adapting to the wireless link conditions and the available energy budget. Evaluations on real-world datasets show that the proposed approach outperforms both periodic sampling and non-adaptive CL in terms of inference recall, achieving up to a 42.8% improvement even under tight energy and bandwidth constraints.
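
A minimal sketch of the event-driven update trigger described above, assuming a Shannon-rate energy model and an illustrative drift score; the thresholds, function names, and parameters are hypothetical and not taken from the letter:

```python
# Illustrative sketch (not the authors' implementation) of an event-driven
# continual-learning update trigger that adapts to link quality and energy budget.
import math

def transmit_energy(num_bits, bandwidth_hz, channel_gain, noise_power, tx_power):
    """Energy (J) to send num_bits at the Shannon rate of the current link (assumed model)."""
    rate = bandwidth_hz * math.log2(1 + tx_power * channel_gain / noise_power)
    return tx_power * num_bits / rate

def should_update(drift_score, drift_threshold, energy_budget, num_bits,
                  bandwidth_hz, channel_gain, noise_power, tx_power):
    """Trigger a CL update only when drift is detected and the link/energy allow it."""
    if drift_score < drift_threshold:          # no significant distribution shift
        return False
    cost = transmit_energy(num_bits, bandwidth_hz, channel_gain, noise_power, tx_power)
    return cost <= energy_budget               # defer the update if too costly now

# Example: 50 kbit of new samples over a 1 MHz link
print(should_update(drift_score=0.8, drift_threshold=0.5, energy_budget=0.01,
                    num_bits=5e4, bandwidth_hz=1e6, channel_gain=1e-6,
                    noise_power=1e-9, tx_power=0.1))
```
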
Abstract: This paper presents an energy-efficient transmission framework for federated learning (FL) in industrial Internet of Things (IIoT) environments with strict latency and energy constraints. Machinery subnetworks (SNs) collaboratively train a global model by uploading local updates to an edge server (ES), either directly or via neighboring SNs acting as decode-and-forward relays. To enhance communication efficiency, relays perform partial aggregation before forwarding the models to the ES, significantly reducing overhead and training latency. We analyze the convergence behavior of this relay-assisted FL scheme. To address the inherent energy efficiency (EE) challenges, we decompose the original non-convex optimization problem into sub-problems addressing computation and communication energy separately. An SN grouping algorithm categorizes devices into single-hop and two-hop transmitters based on latency minimization, followed by a relay selection mechanism. To improve FL reliability, we further maximize the number of SNs that meet the round-wise delay constraint, promoting broader participation and improved convergence stability under practical IIoT data distributions. Transmit power levels are then optimized to maximize EE, and a sequential parametric convex approximation (SPCA) method is proposed for joint configuration of the system parameters. We further extend the EE formulation to the case of imperfect channel state information (ICSI). Simulation results demonstrate that the proposed framework significantly enhances convergence speed, reduces the outage probability from $10^{-2}$ for single-hop transmission to $10^{-6}$, and achieves substantial energy savings, with the SPCA approach reducing energy consumption by at least 2x compared to unaggregated cooperation and up to 6x compared to single-hop transmission.
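
As a rough illustration of the relay-side partial aggregation step, the following sketch performs a FedAvg-style weighted average of local updates at a relay SN before a single model is forwarded to the ES; the function and variable names are hypothetical:

```python
# Minimal sketch of partial aggregation at a decode-and-forward relay SN.
import numpy as np

def partial_aggregate(updates, sample_counts):
    """FedAvg-style weighted average of model parameter vectors at a relay."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# relay's own update plus two neighbor updates (toy 4-parameter models)
own = np.array([0.2, -0.1, 0.5, 1.0])
neighbors = [np.array([0.1, 0.0, 0.4, 0.9]), np.array([0.3, -0.2, 0.6, 1.1])]
aggregated = partial_aggregate([own] + neighbors, sample_counts=[100, 80, 120])
print(aggregated)  # single vector sent over the relay-to-ES hop
```
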
Abstract: Video streaming services depend on the underlying communication infrastructure and available network resources to offer ultra-low latency, high-quality content delivery. Open Radio Access Network (ORAN) provides a dynamic, programmable, and flexible RAN architecture that can be configured to support the requirements of time-critical applications. This work considers a setup in which the constrained network resources are supplemented by \gls{GAI} and \gls{MEC} techniques in order to reach a satisfactory video quality. Specifically, we implement a novel semantic control channel that enables \gls{MEC} to support low-latency applications by tight coupling among the ORAN xApp, \gls{MEC}, and the control channel. The proposed concepts are experimentally verified with an actual ORAN setup that supports video streaming. The performance evaluation includes the \gls{PSNR} metric and end-to-end latency. Our findings reveal that latency adjustments can yield gains in image \gls{PSNR}, underscoring the trade-off potential for optimized video quality in resource-limited environments.
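
For reference, the PSNR metric used in the evaluation follows its standard definition; the snippet below is the textbook computation, not the paper's measurement pipeline:

```python
# Textbook PSNR computation for two 8-bit frames.
import numpy as np

def psnr(reference, received, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two frames of equal shape."""
    mse = np.mean((reference.astype(float) - received.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```
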
Abstract: The immense volume of data generated by Earth observation (EO) satellites poses significant challenges for its transmission to Earth over rate-limited satellite-to-ground communication links. This paper presents an efficient downlink framework for multi-spectral satellite images, leveraging adaptive transmission techniques based on pixel importance and link capacity. By integrating semantic communication principles, the framework prioritizes critical information, such as changed multi-spectral pixels, to optimize data transmission. The process involves preprocessing, assessing pixel importance to encode only significant changes, and dynamically adjusting transmissions to match the channel conditions. Experimental results on a real dataset and a simulated link demonstrate that the proposed approach ensures high-quality data delivery while significantly reducing the amount of transmitted data, making it highly suitable for satellite-based EO applications.
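
A minimal sketch of the change-driven pixel selection idea, assuming a simple per-pixel change threshold and a cap on the number of transmitted pixels set by the current link budget; all names and thresholds are illustrative:

```python
# Illustrative selection of "important" (changed) multi-spectral pixels for downlink.
import numpy as np

def select_changed_pixels(prev_img, curr_img, change_threshold, max_pixels):
    """Return (indices, values) of the most-changed pixels, capped by link capacity."""
    diff = np.abs(curr_img.astype(float) - prev_img.astype(float)).sum(axis=-1)
    candidates = np.argwhere(diff > change_threshold)
    # prioritise the largest changes when the link cannot carry all of them
    order = np.argsort(-diff[tuple(candidates.T)])
    chosen = candidates[order[:max_pixels]]
    return chosen, curr_img[tuple(chosen.T)]

prev = np.random.randint(0, 200, (64, 64, 4))   # 4 spectral bands
curr = prev.copy()
curr[10:12, 20:22] += 40                        # a localized change
idx, vals = select_changed_pixels(prev, curr, change_threshold=30, max_pixels=256)
print(len(idx), "pixels scheduled for downlink")
```
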




Abstract: Training a high-quality Federated Learning (FL) model at the network edge is challenged by limited transmission resources. Although various device scheduling strategies have been proposed, it remains unclear how scheduling decisions affect FL model performance under temporal constraints. This is especially pronounced when the wireless medium is shared to enable the participation of heterogeneous Internet of Things (IoT) devices with distinct communication modes: (1) a scheduling (pull) scheme that selects devices with valuable updates, and (2) random access (push), in which interested devices transmit their model parameters. The motivation for pushing data is the improved representation of a device's own data distribution within the trained FL model and, thereby, better generalization. The scheduling strategy affects the transmission opportunities for push-based communication during the access phase, extending the number of communication rounds required for model convergence. This work investigates the interplay of push-pull interactions in a time-constrained FL setting, where the communication opportunities are finite, with a utility-based analytical model. Using real-world datasets, we provide a performance trade-off analysis that validates the significance of strategic device scheduling under push-pull wireless access for several practical settings. The simulation results elucidate the impact of the device sampling strategy on learning efficiency under timing constraints.
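
The sketch below illustrates one communication round under such push-pull access, assuming the server pulls the highest-utility devices and the remaining devices contend for the leftover slots with slotted-ALOHA-style random access; the utilities and access probability are placeholders, not the paper's analytical model:

```python
# One push-pull communication round under a fixed slot budget (illustrative).
import random

def run_round(utilities, num_slots, num_pulled, push_prob):
    pulled = sorted(range(len(utilities)), key=lambda i: -utilities[i])[:num_pulled]
    push_slots = num_slots - num_pulled
    pushed, contenders = [], [i for i in range(len(utilities)) if i not in pulled]
    for _ in range(push_slots):
        attempts = [i for i in contenders if random.random() < push_prob]
        if len(attempts) == 1:                 # success only without collision
            pushed.append(attempts[0])
            contenders.remove(attempts[0])
    return pulled, pushed

pulled, pushed = run_round(utilities=[0.9, 0.2, 0.7, 0.4, 0.1], num_slots=4,
                           num_pulled=2, push_prob=0.3)
print("pulled:", pulled, "pushed:", pushed)
```
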




Abstract: This paper presents a Digital Twin (DT) framework for the remote control of an Autonomous Guided Vehicle (AGV) within a Networked Control System (NCS). The AGV is monitored and controlled using Integrated Sensing and Communications (ISAC). To meet the real-time requirements, the DT computes the control signals and dynamically allocates resources for sensing and communication. A Reinforcement Learning (RL) algorithm is derived to learn and provide suitable actions while adjusting for the uncertainty in the AGV's position. We present closed-form expressions for the achievable communication rate and the Cramér-Rao bound (CRB) to determine the required number of Orthogonal Frequency-Division Multiplexing (OFDM) subcarriers that meets the needs of both sensing and communication. The proposed algorithm is validated through a millimeter-wave (mmWave) simulation, demonstrating significant improvements in both control precision and communication efficiency.
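
As a rough illustration, the sketch below picks the smallest number of OFDM subcarriers meeting both a rate target and a CRB target; the linear rate and CRB scalings used here are simplifying assumptions, not the paper's closed-form expressions:

```python
# Illustrative subcarrier dimensioning under assumed rate and CRB scalings.
import math

def min_subcarriers(rate_target, crb_target, subcarrier_bw, snr, crb_single):
    n = 1
    while True:
        rate = n * subcarrier_bw * math.log2(1 + snr)   # assumed rate model
        crb = crb_single / n                            # assumed CRB scaling
        if rate >= rate_target and crb <= crb_target:
            return n
        n += 1

print(min_subcarriers(rate_target=50e6, crb_target=1e-3,
                      subcarrier_bw=120e3, snr=10, crb_single=2e-2))
```
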




Abstract: We consider a Wireless Networked Control System (WNCS) where sensors provide observations to build a Digital Twin (DT) model of the underlying system dynamics. The focus is on control, scheduling, and resource allocation for sensory observations to ensure timely delivery to the DT model deployed in the cloud. Timely and relevant information, as characterized by an optimized data acquisition policy and low latency, is instrumental in ensuring that the DT model can accurately estimate and predict system states. However, optimizing closed-loop control with the DT and acquiring data for efficient state estimation and control computation pose a non-trivial problem given the limited network resources, partial state vector information, and measurement errors encountered at distributed sensing agents. To address this, we propose the \emph{Age-of-Loop REinforcement learning and Variational Extended Kalman filter with Robust Belief (AoL-REVERB)} framework, which leverages an uncertainty-control reinforcement learning solution combined with an algorithm based on the Value of Information (VoI) for performing optimal control and selecting the most informative sensors to satisfy the prediction accuracy requirement of the DT. Numerical results demonstrate that the DT platform can offer satisfactory performance while halving the communication overhead.
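
A minimal sketch of VoI-driven sensor selection with a (linearized) Kalman measurement update: sensors are greedily chosen to maximize the reduction in the trace of the state-estimate covariance until a transmission budget is exhausted. The matrices and budget are illustrative, and the RL component of AoL-REVERB is omitted:

```python
# Greedy VoI-based sensor selection using Kalman covariance updates (illustrative).
import numpy as np

def posterior_cov(P, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def greedy_voi_selection(P, sensors, budget):
    """sensors: list of (H, R) measurement models; returns selected indices."""
    selected = []
    for _ in range(budget):
        gains = [(np.trace(P) - np.trace(posterior_cov(P, H, R)), i)
                 for i, (H, R) in enumerate(sensors) if i not in selected]
        best_gain, best_i = max(gains)
        selected.append(best_i)
        P = posterior_cov(P, *sensors[best_i])     # condition on the chosen sensor
    return selected, P

P0 = np.diag([1.0, 1.0, 0.5, 0.5])                 # prior covariance (4-dim state)
sensors = [(np.eye(4)[[0]], np.array([[0.1]])),    # each sensor observes one state
           (np.eye(4)[[1]], np.array([[0.5]])),
           (np.eye(4)[[2]], np.array([[0.05]]))]
sel, P = greedy_voi_selection(P0, sensors, budget=2)
print("selected sensors:", sel)
```
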




Abstract: The paper examines a scenario wherein sensors are deployed within an Industrial Networked Control System, aiming to construct a digital twin (DT) model for a remotely operated Autonomous Guided Vehicle (AGV). The DT model, situated on a cloud platform, estimates and predicts the system's state and subsequently formulates the optimal scheduling strategy for execution in the physical world. However, acquiring the data crucial for efficient state estimation and control computation poses a significant challenge, primarily due to constraints such as limited network resources, partial observation, and the necessity to maintain a certain confidence level for the DT estimate. We propose an algorithm based on the Value of Information (VoI), seamlessly integrated with the Extended Kalman Filter, to deliver a polynomial-time solution that selects the most informative subset of sensing agents for data collection. Additionally, we put forth an alternative solution leveraging a Graph Neural Network to ascertain the AGV's position with an accuracy of up to 5 cm. Our experimental validation in an industrial robotic laboratory environment yields promising results, underscoring the potential of high-accuracy DT models in practice.




Abstract: We consider an Internet of Things (IoT) setup in which a base station (BS) collects data from nodes that use two different communication modes. The first is pull-based, where the BS retrieves data from specific nodes through queries; these nodes contain a wake-up receiver, and upon a query the BS sends a wake-up signal (WuS) to activate the corresponding wake-up devices (WuDs). The second is push-based communication, in which the nodes decide when to send data to the BS. We consider a time-slotted model, where the time slots in each frame are shared by pull-based and push-based communications. This coexistence scenario gives rise to a new type of problem with fundamental trade-offs in sharing communication resources: the objective of serving a maximum number of queries within a specified deadline limits the transmission opportunities for push sensors, and vice versa. This work develops a mathematical model that characterizes these trade-offs, validates them through simulations, and optimizes the frame design to meet the objectives of both pull- and push-based communications.
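
The trade-off can be sketched numerically as below, assuming one query is served per pull slot and the push sensors contend for the remaining slots via slotted ALOHA; the traffic parameters are illustrative, not the paper's model:

```python
# Illustrative pull/push frame-split trade-off under a slotted-ALOHA push model.

def push_throughput(num_push_nodes, access_prob, push_slots):
    """Expected number of successful (collision-free) push transmissions per frame."""
    p_success = (num_push_nodes * access_prob *
                 (1 - access_prob) ** (num_push_nodes - 1))
    return push_slots * p_success

frame_slots, num_push_nodes, access_prob = 10, 20, 0.05
for pull_slots in range(frame_slots + 1):
    served_queries = pull_slots                      # one query served per pull slot
    pushes = push_throughput(num_push_nodes, access_prob, frame_slots - pull_slots)
    print(f"pull slots {pull_slots}: queries {served_queries}, "
          f"expected pushes {pushes:.2f}")
```
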


Abstract: Standard client selection algorithms for Federated Learning (FL) are often unbiased, relying on uniform random sampling of clients. This has been proven sub-optimal for fast convergence under practical settings characterized by significant heterogeneity in data distribution, computing, and communication resources across clients. For applications with timing constraints due to limited communication opportunities with the parameter server (PS), the client selection strategy is critical to completing model training within the fixed budget of communication rounds. To address this, we develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round. The method builds on a fast approximation algorithm for the Shapley value at the PS, making the computation tractable for real-world applications with many clients. Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints and when imposing a higher degree of heterogeneity in data distribution, system constraints, and privacy requirements.
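
A minimal sketch of the GreedyFed idea under stated assumptions: client Shapley values are approximated by Monte Carlo permutation sampling of marginal contributions to a validation utility, and the top-k clients are selected greedily; the toy utility function is a placeholder, not the paper's approximation scheme:

```python
# Monte Carlo Shapley approximation and greedy client selection (illustrative).
import random

def monte_carlo_shapley(clients, utility, num_permutations=200):
    """utility(subset) -> validation score of the model aggregated from subset."""
    shapley = {c: 0.0 for c in clients}
    for _ in range(num_permutations):
        perm = random.sample(clients, len(clients))
        prev_value, coalition = utility(frozenset()), []
        for c in perm:
            coalition.append(c)
            value = utility(frozenset(coalition))
            shapley[c] += (value - prev_value) / num_permutations
            prev_value = value
    return shapley

def greedy_select(shapley, k):
    return sorted(shapley, key=shapley.get, reverse=True)[:k]

# toy utility: diminishing returns in the number of "useful" clients included
useful = {0, 2, 3}
utility = lambda s: 1 - 0.5 ** len(s & useful)
sv = monte_carlo_shapley(list(range(5)), utility)
print(greedy_select(sv, k=2))   # clients chosen for the next communication round
```
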