Data markets facilitate decentralized data exchange for applications such as prediction, learning, or inference. The design of these markets is challenged by the varying privacy preferences of data owners and the similarity among their data. Related works have often overlooked how data similarity impacts pricing and data value through statistical information leakage. We demonstrate that data similarity and privacy preferences are integral to market design and propose a query-response protocol using local differential privacy for a two-party data acquisition mechanism. In our regression data market model, we analyze the strategic interactions between privacy-aware owners and the learner as a Stackelberg game over the asking price and privacy factor. Finally, we numerically evaluate how data similarity affects market participation and the value of traded data.
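As a minimal sketch of the query-response idea under local differential privacy, the snippet below perturbs a data owner's scalar response with the Laplace mechanism; the function name, the unit sensitivity, and the parameter values are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def ldp_response(y, epsilon, sensitivity=1.0, rng=None):
    """Perturb a data owner's scalar response with the Laplace mechanism.

    A smaller epsilon (the owner's privacy factor) means more noise,
    hence less statistical value for the learner."""
    rng = rng or np.random.default_rng()
    return y + rng.laplace(scale=sensitivity / epsilon)

# Illustrative values only: the learner queries an owner whose true response is 0.7.
rng = np.random.default_rng(0)
print(ldp_response(0.7, epsilon=0.5, rng=rng))  # heavily perturbed
print(ldp_response(0.7, epsilon=5.0, rng=rng))  # nearly exact
```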
The setup considered in this paper consists of sensors in a Networked Control System that are used to build a digital twin (DT) model of the system dynamics. The focus is on control, scheduling, and resource allocation for sensor observations to ensure their timely delivery to the DT model deployed in the cloud. Low latency and communication timeliness are instrumental in ensuring that the DT model can accurately estimate and predict system states. However, acquiring data for efficient state estimation and control computation is a non-trivial problem given the limited network resources, partial state vector information, and measurement errors encountered at distributed sensors. We propose REinforcement learning and Variational Extended Kalman filter with Robust Belief (REVERB), which combines a reinforcement learning solution with a Value of Information-based algorithm to perform optimal control and select the most informative sensors, so as to meet the prediction accuracy requirements of the DT. Numerical results demonstrate that the DT platform offers satisfactory performance while reducing the communication overhead by up to a factor of five.
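The sensor-selection step can be pictured with a simple Value of Information proxy: pick the sensor whose Kalman covariance update most shrinks the posterior uncertainty. The sketch below uses a plain linear Kalman update as a stand-in for REVERB's variational extended Kalman filter; all matrices and values are invented for illustration.

```python
import numpy as np

def voi_select(P, sensors):
    """Pick the sensor whose measurement most reduces the posterior
    covariance trace; a crude Value of Information proxy.

    P: prior state covariance; sensors: list of (H, R) pairs, where H
    is the observation matrix and R the sensor noise covariance."""
    best_k, best_trace = None, np.inf
    for k, (H, R) in enumerate(sensors):
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        trace = np.trace((np.eye(len(P)) - K @ H) @ P)
        if trace < best_trace:
            best_k, best_trace = k, trace
    return best_k

# Hypothetical example: two sensors observing different components of a 2-state system.
P = np.diag([4.0, 1.0])
sensors = [(np.array([[1.0, 0.0]]), np.array([[0.5]])),
           (np.array([[0.0, 1.0]]), np.array([[0.1]]))]
print(voi_select(P, sensors))  # selects the sensor observing the uncertain state
```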
Spatially correlated device activation is a typical feature of the Internet of Things (IoT). This motivates the development of channel scheduling (CS) methods that efficiently mitigate device collisions in such scenarios, which constitutes the scope of this work. Specifically, we present a quadratic program (QP) formulation of the CS problem that accounts for the joint activation probabilities among devices. This formulation allows the devices to select their transmit channels stochastically, leading to a soft-clustering approach. We prove that the optimal QP solution is attained at a hard-clustering configuration, yielding a pure-integer QP, which we recast as a pure-integer linear program (PILP). We leverage the branch-and-cut (B&C) algorithm to solve the PILP optimally. Due to the high computational cost of B&C, we also consider sub-optimal clustering methods with low computational cost to tackle the clustering problem in CS. Our findings demonstrate that the CS strategy sourced from B&C significantly outperforms those derived from sub-optimal clustering methods, even amidst increased device correlation.
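On a toy instance, the hard-clustering objective can be made concrete: assign each device a channel so that the expected number of pairwise collisions, driven by the joint activation probabilities, is minimized. The exhaustive search below is a small-scale stand-in for solving the PILP with B&C, and the probability matrix is invented.

```python
import itertools
import numpy as np

def best_hard_assignment(P, n_channels):
    """Exhaustive search for the channel assignment minimizing the
    expected number of pairwise collisions; P[i, j] is the joint
    activation probability of devices i and j.  Only feasible at toy
    scale, unlike the B&C solution of the PILP."""
    n = P.shape[0]
    best, best_cost = None, np.inf
    for assign in itertools.product(range(n_channels), repeat=n):
        cost = sum(P[i, j] for i in range(n) for j in range(i + 1, n)
                   if assign[i] == assign[j])
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# Invented example: 6 devices forming two spatially correlated triples, 2 channels.
rng = np.random.default_rng(3)
P = rng.uniform(0.0, 0.02, size=(6, 6)); P = (P + P.T) / 2
P[:3, :3] += 0.2; P[3:, 3:] += 0.2  # correlated groups
print(best_hard_assignment(P, n_channels=2))
```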
Consider an active learning setting in which a learner has a training set with few labeled examples and a pool set with many unlabeled inputs, while a remote teacher has a pre-trained model that is known to perform well for the learner's task. The learner actively transmits batches of unlabeled inputs to the teacher through a constrained communication channel for labeling. This paper addresses the following key questions: (i) Active batch selection: Which batch of inputs should be sent to the teacher to acquire the most useful information and thus reduce the number of required communication rounds? (ii) Batch encoding: How should the batch of inputs be encoded for transmission to the teacher to reduce the communication resources required at each round? We introduce Communication-Constrained Bayesian Active Knowledge Distillation (CC-BAKD), a novel protocol that integrates Bayesian active learning with compression via a linear mix-up mechanism. Bayesian active learning selects the batch of inputs based on their epistemic uncertainty, addressing the "confirmation bias" that is known to increase the number of required communication rounds. Furthermore, the proposed mix-up compression strategy is integrated with the epistemic uncertainty-based batch selection process to reduce the communication overhead per round.
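A rough sketch of the two ingredients, under the assumption that epistemic uncertainty is scored with BALD-style mutual information over stochastic forward passes and that the mix-up weights are Dirichlet-distributed (neither detail is confirmed by the abstract):

```python
import numpy as np

def bald_scores(probs):
    """Epistemic uncertainty via BALD mutual information.

    probs: (S, N, C) class probabilities from S stochastic forward
    passes (e.g., MC dropout) over N pool inputs."""
    mean = probs.mean(axis=0)                               # (N, C)
    h_mean = -(mean * np.log(mean + 1e-12)).sum(axis=-1)    # total uncertainty
    h_each = -(probs * np.log(probs + 1e-12)).sum(axis=-1)  # (S, N)
    return h_mean - h_each.mean(axis=0)                     # epistemic part

def mixup_compress(batch, m, rng):
    """Compress a (B, D) batch into m < B random convex mixtures."""
    w = rng.dirichlet(np.ones(batch.shape[0]), size=m)      # (m, B) weights
    return w @ batch

# Toy usage: score a pool, pick the most uncertain batch, send mixtures.
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=(10, 100))           # S=10, N=100, C=3
batch_idx = np.argsort(bald_scores(probs))[-8:]             # batch size B=8
pool = rng.normal(size=(100, 32))                           # hypothetical features
print(mixup_compress(pool[batch_idx], m=2, rng=rng).shape)  # (2, 32)
```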
This work considers a scenario in which an edge server collects data from Internet of Things (IoT) devices equipped with wake-up receivers. Although this procedure enables on-demand data collection, energy is still wasted if the data transmitted after the wake-up turn out to be irrelevant. To mitigate this, we advocate the use of Tiny Machine Learning (ML) to enable a semantic response from the IoT devices, so that they send only semantically relevant data. Nevertheless, receiving the ML model and running it at the IoT devices consume additional energy. We consider the specific instance of image retrieval and investigate the gain brought by the proposed scheme in terms of energy efficiency, accounting for both the energy cost of introducing the ML model and that of wireless communication. The numerical evaluation shows that, compared to a baseline scheme, the proposed scheme achieves both high retrieval accuracy and high energy efficiency, reaching an energy reduction of up to 70% when eight or more images are stored.
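The energy tradeoff can be sketched with a toy budget: the semantic scheme pays a one-off model download and a per-image inference cost, but transmits only the relevant fraction of images. All numbers below are invented for illustration and merely echo the crossover behavior around a handful of stored images.

```python
# Hypothetical energy budget in millijoules; illustrative only.
E_TX_IMAGE = 5.0    # transmit one image
E_INFER = 0.8       # one TinyML inference
E_MODEL_RX = 20.0   # one-off download of the ML model

def baseline_energy(n_images):
    return n_images * E_TX_IMAGE  # send everything after wake-up

def semantic_energy(n_images, relevant_fraction=0.1):
    # Infer locally on each image, transmit only the relevant ones.
    return E_MODEL_RX + n_images * (E_INFER + relevant_fraction * E_TX_IMAGE)

for n in (4, 8, 32):
    b, s = baseline_energy(n), semantic_energy(n)
    print(f"n={n}: baseline {b:.1f} mJ, semantic {s:.1f} mJ, saving {1 - s/b:.0%}")
```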
One of the primary goals of future wireless systems is to foster sustainability, for which radio frequency (RF) wireless power transfer (WPT) is considered a key technology enabler. The main challenge of RF-WPT systems is the extremely low end-to-end efficiency, mainly due to the losses introduced by the wireless channel. Distributed antenna systems are appealing as they can significantly shorten the charging distances, thus reducing channel losses. Interestingly, radio stripe systems provide a cost-efficient and scalable way to deploy a distributed multi-antenna system and have therefore received much attention recently. Herein, we consider an RF-WPT system with a transmit radio stripe network that charges multiple indoor energy hotspots, i.e., spatial regions where the energy harvesting devices are expected to be located, including near-field locations. We formulate the optimal radio stripe deployment problem aimed at maximizing the minimum power received by the users and explore two predefined shapes, namely straight-line and polygon-shaped configurations. Then, we provide efficient solutions relying on geometric programming to optimize the locations of the radio stripe elements. The results demonstrate that the proposed radio stripe deployments outperform a central fully-digital square array with the same number of elements, that larger radio stripe lengths can enhance performance, and that increasing the system frequency may degrade it.
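The max-min objective is easy to state in code: evaluate the minimum, over hotspot locations, of the total power received from all stripe elements for a candidate deployment. The sketch below compares a straight-line stripe against a compact central array under a simple power-law path-loss model with incoherent combining; the geometry and all parameters are invented.

```python
import numpy as np

def min_received_power(elements, hotspots, p_elem=1.0, alpha=2.0):
    """Minimum over hotspots of the total power received from all
    radio stripe elements, under an invented distance**(-alpha) model
    with incoherent power combining."""
    d = np.linalg.norm(hotspots[:, None, :] - elements[None, :, :], axis=-1)
    return (p_elem * np.clip(d, 0.1, None) ** (-alpha)).sum(axis=1).min()

# Toy 2D room: a straight-line stripe along one wall vs. a central square array.
hotspots = np.array([[1.0, 1.0], [4.0, 1.0], [2.5, 3.5]])
line = np.stack([np.linspace(0, 5, 16), np.zeros(16)], axis=1)
square = np.stack(np.meshgrid(np.linspace(2.3, 2.7, 4),
                              np.linspace(2.3, 2.7, 4)), axis=-1).reshape(-1, 2)
print(min_received_power(line, hotspots), min_received_power(square, hotspots))
```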
We propose an efficient solution to the state estimation problem in multi-scan, multi-sensor, multiple extended target sensing scenarios. We first model the measurement process by a doubly inhomogeneous generalized shot-noise Cox process and then estimate the parameters using a jump Markov chain Monte Carlo sampling technique. The proposed approach scales linearly in the number of measurements and can take spatial properties of the sensors into account, namely sensor noise covariance, detection probability, and resolution. Numerical experiments using radar measurement data suggest that, in high-clutter scenarios with closely spaced targets, the algorithm improves over the state-of-the-art clustering techniques used in existing multiple extended target tracking algorithms.
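A generative sketch of such a measurement model (not the jump-MCMC inference): each extended target spawns a Poisson number of Gaussian-scattered detections on top of uniform Poisson clutter. All rates and spreads below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_extended_target_scan(centers, rate=8, spread=0.5,
                                clutter_rate=20, extent=10.0):
    """Simulate one radar scan under a shot-noise-Cox-like model:
    each target spawns a Poisson number of Gaussian-scattered
    detections, plus uniform Poisson clutter over the surveillance
    region.  Parameters are illustrative assumptions."""
    points = [c + spread * rng.standard_normal((rng.poisson(rate), 2))
              for c in centers]
    clutter = extent * rng.random((rng.poisson(clutter_rate), 2))
    return np.concatenate(points + [clutter])

scan = sample_extended_target_scan(centers=np.array([[3.0, 3.0], [3.6, 3.4]]))
print(scan.shape)  # closely spaced targets buried in clutter
```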
Low-latency communication plays an increasingly important role in delay-sensitive applications by ensuring the real-time exchange of information. However, due to constraints on the maximum instantaneous power, bounded latency is hard to guarantee. In this paper, we investigate the reliability-latency-rate tradeoff in low-latency communications with finite-blocklength coding (FBC). More specifically, we are interested in the fundamental tradeoff between error probability, delay-violation probability (DVP), and service rate. Based on the effective capacity (EC) and the normal approximation, we present several gain-conservation inequalities that bound the reliability-latency-rate tradeoffs. In particular, we investigate low-latency transmissions over an additive white Gaussian noise (AWGN) channel, over a Rayleigh fading channel, with frequency or spatial diversity, and over a Nakagami-$m$ fading channel. To analytically evaluate quality-of-service-constrained low-latency communications with FBC, we further conceive an EC-approximation method that yields a closed-form expression for the quality-of-service-constrained throughput. For delay-sensitive transmissions in which the latency threshold is greater than the channel coherence time, we find an asymptotic form of the tradeoff between the error probability and the DVP over the AWGN and Rayleigh fading channels. Our results may provide insights into the efficient scheduling of low-latency wireless communications in which statistical latency and reliability metrics are adopted.
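For reference, the normal approximation underlying such FBC analyses has a compact form over the AWGN channel: the maximal rate at blocklength $n$ and error probability $\epsilon$ is approximately $C - \sqrt{V/n}\,Q^{-1}(\epsilon) + \log_2(n)/(2n)$, with capacity $C$ and channel dispersion $V$. The snippet below evaluates this expression; it illustrates the FBC regime rather than reproducing the paper's gain-conservation bounds.

```python
import numpy as np
from scipy.stats import norm

def na_rate(snr, n, eps):
    """Maximal rate (bits per channel use) over AWGN at blocklength n
    and error probability eps, via the normal approximation."""
    C = np.log2(1 + snr)
    V = (snr * (snr + 2) / (snr + 1) ** 2) * np.log2(np.e) ** 2  # dispersion
    return C - np.sqrt(V / n) * norm.isf(eps) + np.log2(n) / (2 * n)

print(na_rate(snr=10.0, n=200, eps=1e-5))  # 10 dB SNR, short block
```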
Location information is often used as a proxy to guarantee the performance of a wireless communication link. However, localization errors can result in a significant mismatch with these guarantees, which is particularly detrimental to users operating in the ultra-reliable low-latency communication (URLLC) regime. This paper unveils the fundamental statistical relations between location estimation uncertainty and wireless link reliability, specifically in the context of rate selection for ultra-reliable communication. We start with a simple one-dimensional narrowband Rayleigh fading scenario and build towards a two-dimensional scenario in a rich scattering environment. The wireless link reliability is characterized by the meta-probability, i.e., the probability, with respect to the localization error, of exceeding the outage capacity; by removing other sources of error from the system, we show that reliability is sensitive to localization errors. The $\epsilon$-outage coherence radius is defined and shown to provide valuable insight into the problem of location-based rate selection. However, it is generally challenging to guarantee reliability without accurate knowledge of the propagation environment. Finally, several rate-selection schemes are proposed, showcasing the dynamics of the problem and revealing that properly accounting for the localization error is critical to ensuring good performance in terms of reliability and achievable throughput.
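A Monte Carlo sketch of the meta-probability for a toy one-dimensional Rayleigh setting: select the rate from the $\epsilon$-outage capacity at the estimated distance and measure how often it exceeds the outage capacity at the true distance. The path-loss model and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def eps_outage_capacity(d, eps=1e-3, gamma0=1e8, alpha=3.5):
    """epsilon-outage capacity (bits/s/Hz) of a Rayleigh link whose
    average SNR follows an invented power law gamma0 * d**(-alpha)."""
    gbar = gamma0 * d ** (-alpha)
    return np.log2(1 - gbar * np.log(1 - eps))

# Rate selection from a noisy location estimate d_hat = d + error.
d_true, sigma = 20.0, 2.0
d_hat = d_true + sigma * rng.standard_normal(100_000)
rate = eps_outage_capacity(np.clip(d_hat, 1.0, None))

# Meta-probability: chance the selected rate exceeds the true outage capacity.
meta_p = np.mean(rate > eps_outage_capacity(d_true))
print(f"meta-probability: {meta_p:.3f}")  # ~0.5 without a back-off margin
```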
5G has expanded the traditional focus of wireless systems to embrace two new connectivity types: ultra-reliable low-latency and massive communication. The technology context at the dawn of 6G differs from that of 5G, primarily due to the growing intelligence at the communicating nodes. This has driven the set of relevant communication problems beyond reliable transmission towards semantic and pragmatic communication. This paper puts the evolution of low-latency and massive communication towards 6G in the perspective of these new developments. First, semantic/pragmatic communication problems are presented by drawing parallels to linguistics. We elaborate upon the relation of semantic communication to the information-theoretic problems of source/channel coding, while generalized real-time communication is put in the context of cyber-physical systems and real-time inference. Finally, the evolution of massive access towards massive closed-loop communication is elaborated upon, enabling interactive communication, learning, and cooperation among wireless sensors and actuators.