In this work, we study the problem of real-time tracking and reconstruction of an information source with the purpose of actuation. A device monitors an $N$-state Markov process and transmits status updates to a receiver over a wireless erasure channel. We consider a set of joint sampling and transmission policies, including a semantics-aware one, and we study their performance with respect to relevant metrics. Specifically, we investigate the real-time reconstruction error and its variance, the consecutive error, the cost of memory error, and the cost of actuation error. Furthermore, we propose a randomized stationary sampling and transmission policy and derive closed-form expressions for all the aforementioned metrics. We then formulate an optimization problem for minimizing the real-time reconstruction error subject to a sampling cost constraint. Our results show that, under constrained sampling generation, the optimal randomized stationary policy outperforms all other sampling policies when the source evolves rapidly; otherwise, the semantics-aware policy performs best.
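As a rough illustration of the randomized stationary policy described above, the following is a minimal Monte Carlo sketch (all function and parameter names are our own, not the paper's; the paper derives these metrics in closed form rather than by simulation):

```python
import random

def simulate_reconstruction_error(P, p_sample, p_success, T=100_000, seed=0):
    """Monte Carlo estimate of the real-time reconstruction error
    Pr(X_t != Xhat_t) under a randomized stationary policy: in each slot the
    source is sampled with probability p_sample and the update survives the
    erasure channel with probability p_success. P is the N x N Markov
    transition matrix of the source."""
    rng = random.Random(seed)
    N = len(P)
    x = 0       # current source state
    x_hat = 0   # receiver's estimate: the last successfully received sample
    errors = 0
    for _ in range(T):
        # the source makes one Markov transition
        r, acc = rng.random(), 0.0
        for j in range(N):
            acc += P[x][j]
            if r < acc:
                x = j
                break
        # sample with prob. p_sample; the update survives erasure w.p. p_success
        if rng.random() < p_sample and rng.random() < p_success:
            x_hat = x
        errors += (x != x_hat)
    return errors / T

# example: two-state symmetric source with self-transition probability 0.9
P = [[0.9, 0.1], [0.1, 0.9]]
```

With `p_sample = p_success = 1` every state change is delivered immediately and the error vanishes; lowering either probability leaves the receiver holding a stale estimate for part of the time.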
The multi-user linearly-separable distributed computing problem is considered here, in which $N$ servers help to compute the real-valued functions requested by $K$ users, where each function can be written as a linear combination of up to $L$ (generally non-linear) subfunctions. Each server computes a fraction $\gamma$ of the subfunctions, then communicates a function of its computed outputs to some of the users, and each user then combines its received data to recover its desired function. Our goal is to bound the ratio of the total computation workload across all servers to the number of datasets. To this end, we reformulate the real-valued distributed computing problem first into a matrix factorization problem and then into a basic sparse recovery problem, where sparsity implies computational savings. Building on this, we first give a simple probabilistic scheme for subfunction assignment, which allows us to upper bound by $\gamma \leq \frac{K}{N}$ the optimal normalized computation cost that a generally intractable $\ell_0$-minimization would give. To bypass the intractability of such an optimal scheme, we show that if these optimal schemes enjoy $\gamma \leq - r\frac{K}{N}W^{-1}_{-1}(- \frac{2K}{e N r} )$ (where $W_{-1}(\cdot)$ is the Lambert $W$ function and $r$ calibrates the communication between servers and users), then they can in fact be derived using tractable Basis Pursuit $\ell_1$-minimization. This newly revealed connection between distributed computation and compressed sensing opens up the possibility of designing practical distributed computing algorithms by employing tools and methods from compressed sensing.
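The $\ell_1$-recoverability threshold above can be evaluated numerically. The sketch below, using only the standard library, reads $W_{-1}^{-1}$ as the reciprocal of the $W_{-1}$ branch (our reading of the abstract's notation, which makes the $\ell_1$ condition stricter than the $\ell_0$ bound $K/N$, as one would expect) and computes $W_{-1}$ by Newton's method:

```python
import math

def lambert_w_minus1(x, iters=50):
    """Real branch W_{-1} of w * exp(w) = x, valid for -1/e < x < 0,
    computed by Newton's method from the standard asymptotic initial guess."""
    assert -1.0 / math.e < x < 0.0, "W_{-1} is real only on (-1/e, 0)"
    w = math.log(-x) - math.log(-math.log(-x))
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / ((w + 1.0) * ew)
    return w

def l1_gamma_threshold(K, N, r):
    """Evaluate -r*(K/N)*W_{-1}^{-1}(-2K/(e*N*r)), with W_{-1}^{-1} taken as
    the reciprocal of the W_{-1} branch (an assumption on the notation)."""
    w = lambert_w_minus1(-2.0 * K / (math.e * N * r))
    return -r * (K / N) / w
```

For example, with $K=10$ users, $N=1000$ servers and $r=1$, the threshold comes out strictly between $0$ and $K/N = 0.01$, i.e., $\ell_1$ recovery demands a sparser (cheaper) solution than the $\ell_0$ bound guarantees to exist.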
Internet of Things (IoT) devices will play an important role in emerging applications, since their sensing, actuation, processing, and wireless communication capabilities stimulate the data collection, transmission, and decision processes of smart applications. However, new challenges arise from the widespread popularity of IoT devices, including the need to process more complicated data structures and high-dimensional data/signals. The unprecedented volume, heterogeneity, and velocity of IoT data call for a communication paradigm shift from a search for accuracy or fidelity to semantics extraction and goal accomplishment. In this paper, we provide a partial but insightful overview of recent research efforts in this newly formed area of goal-oriented (GO) and semantic communications, focusing on the problem of GO data compression for IoT applications.
This work takes a critical look at the application of conventional machine learning methods to wireless communication problems through the lens of reliability and robustness. Deep learning techniques adopt a frequentist framework, and are known to provide poorly calibrated decisions that do not reproduce the true uncertainty caused by limitations in the size of the training data. Bayesian learning, while in principle capable of addressing this shortcoming, is in practice impaired by model misspecification and by the presence of outliers. Both problems are pervasive in wireless communication settings, in which the capacity of machine learning models is subject to resource constraints and training data is affected by noise and interference. In this context, we explore the application of the framework of robust Bayesian learning. After a tutorial-style introduction to robust Bayesian learning, we showcase its merits on several important wireless communication problems in terms of accuracy, calibration, and robustness to outliers and misspecification.
Decentralized learning algorithms empower interconnected edge devices to share data and computational resources to collaboratively train a machine learning model without the aid of a central coordinator (e.g., an orchestrating base station). In the case of heterogeneous data distributions at the network devices, collaboration can yield predictors with unsatisfactory performance for a subset of the devices. For this reason, in this work we consider the formulation of a distributionally robust decentralized learning task and we propose a decentralized single-loop gradient descent/ascent algorithm (AD-GDA) to solve the underlying minimax optimization problem. We render our algorithm communication efficient by employing a compressed consensus scheme and we provide convergence guarantees for smooth convex and non-convex loss functions. Finally, we corroborate the theoretical findings with empirical evidence of the ability of the proposed algorithm to provide unbiased predictors over a network of collaborating devices with highly heterogeneous data distributions.
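To make the minimax structure concrete, here is a toy single-loop gradient descent/ascent sketch on a scalar distributionally robust objective; it illustrates only the descent/ascent mechanics, not AD-GDA itself (no decentralization, compression, or consensus), and all names are hypothetical:

```python
import math

def toy_drl_gda(a, steps=4000, lr_theta=0.05, lr_lam=0.05):
    """Single-loop GDA on  min_theta max_{lam in simplex} sum_i lam_i*(theta - a_i)^2,
    a toy stand-in for a distributionally robust objective over devices whose
    data is centered at a_i. The max player uses exponentiated-gradient
    (mirror) ascent, which keeps lam on the probability simplex. Returns the
    averaged theta iterate, since averaging stabilizes GDA on convex-concave
    games."""
    m = len(a)
    theta, lam = 0.5, [1.0 / m] * m
    theta_sum, count = 0.0, 0
    for t in range(steps):
        losses = [(theta - ai) ** 2 for ai in a]
        # descent step on the model parameter
        grad_theta = sum(li * 2.0 * (theta - ai) for li, ai in zip(lam, a))
        theta -= lr_theta * grad_theta
        # mirror-ascent step on the adversarial device weights
        w = [li * math.exp(lr_lam * loss) for li, loss in zip(lam, losses)]
        z = sum(w)
        lam = [wi / z for wi in w]
        if t >= steps // 2:  # average the second half of the trajectory
            theta_sum += theta
            count += 1
    return theta_sum / count, lam
```

For two "devices" at $a = (-1, 1)$, the robust solution balances both losses, so the iterates settle near $\theta = 0$ with roughly uniform weights, rather than at the mean of whichever device dominates.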
Emerging communication networks are envisioned to support massive wireless connectivity of heterogeneous devices with sporadic traffic and diverse requirements in terms of latency, reliability, and bandwidth. Providing multiple access to an increasing number of uncoordinated users and sharing the limited resources become essential in this context. In this work, we revisit the random access (RA) problem and exploit the continuous angular group sparsity feature of wireless channels to propose a novel RA strategy that provides low latency, high reliability, and massive access with limited bandwidth resources in an all-in-one package. To this end, we first design a reconstruction-free goal-oriented optimization problem, which only preserves the angular information required to identify the active devices. To solve it, we develop an alternating direction method of multipliers (ADMM) algorithm and derive closed-form expressions for each ADMM step. Then, we design a clustering algorithm that assigns the users to specific groups from which we can identify active stationary devices by their angles. For mobile devices, we propose an alternating minimization algorithm to recover their data and their channel gains simultaneously, which allows us to identify active mobile users. Simulation results show significant performance gains in terms of active user detection and false alarm probabilities as compared to state-of-the-art RA schemes, even with a limited number of preambles. Moreover, unlike prior work, the performance of the proposed blind goal-oriented massive access does not depend on the number of devices.
Affine Frequency Division Multiplexing (AFDM), a new chirp-based multicarrier waveform for high-mobility communications, is introduced here. AFDM is based on the discrete affine Fourier transform (DAFT), a generalization of the discrete Fourier transform, which is characterized by two parameters that can be adapted to better cope with doubly dispersive channels. First, we derive the explicit input-output relation in the DAFT domain, showing the effect of the AFDM parameters on it. Second, we show how the DAFT parameters underlying AFDM have to be set so that the resulting DAFT-domain impulse response conveys a full delay-Doppler representation of the channel. Then, we show analytically that, thanks to this full delay-Doppler representation, AFDM achieves full diversity in doubly dispersive channels, where full diversity refers to the number of multipath components separable in either the delay or the Doppler domain. Furthermore, we present a low-complexity detection method taking advantage of zero-padding. We also propose an embedded pilot-aided channel estimation scheme for AFDM, in which both channel estimation and data detection are performed within the same AFDM frame. Finally, simulations corroborate the validity of our analytical results and show the significant performance gains of AFDM over state-of-the-art multicarrier schemes in high-mobility scenarios.
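For readers unfamiliar with the DAFT, a minimal sketch of the transform matrix in the factored form commonly used in the AFDM literature (the specific tuning of $c_1$, $c_2$ to the channel's delay and Doppler spreads is the subject of the paper and is not reproduced here):

```python
import numpy as np

def daft_matrix(N, c1, c2):
    """Discrete affine Fourier transform (DAFT) matrix in the form
    A = Lambda_{c2} @ F @ Lambda_{c1}, where F is the unitary N-point DFT
    matrix and Lambda_c = diag(exp(-2j*pi*c*n^2)) for n = 0, ..., N-1.
    c1 and c2 are the two tunable chirp parameters; c1 = c2 = 0 recovers
    the plain DFT."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    L1 = np.diag(np.exp(-2j * np.pi * c1 * n ** 2))
    L2 = np.diag(np.exp(-2j * np.pi * c2 * n ** 2))
    return L2 @ F @ L1

# AFDM modulation/demodulation sketch: the transmitter sends s = A.conj().T @ x
# for DAFT-domain symbols x, and the receiver applies A to return to the
# DAFT domain.
```

Since each factor is unitary, $A$ is unitary, so DAFT demodulation is simply the conjugate transpose of modulation.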
Affine Frequency Division Multiplexing (AFDM), which is based on the discrete affine Fourier transform (DAFT), has recently been proposed for reliable communication in high-mobility scenarios. Two low-complexity detectors for AFDM are introduced here. Approximating the channel matrix as a band matrix by placing null symbols in the AFDM frame in the DAFT domain, we propose a low-complexity MMSE detector based on the $\rm{LDL}$ factorization. Furthermore, exploiting the sparsity of the channel matrix, we propose a low-complexity iterative decision feedback equalizer (DFE) based on weighted maximal ratio combining (MRC), which extracts and combines the received multipath components of the transmitted symbols in the DAFT domain. Simulation results show that the two proposed detectors have similar performance, while the weighted MRC-based DFE has lower complexity than the band-matrix-approximation LMMSE detector when the channel impulse response has gaps.
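To illustrate the $\rm{LDL}$-based MMSE step, here is a generic (non-banded) sketch; the function and variable names are ours, and the banded version described in the abstract would restrict the inner sums to the band, reducing the cost from $O(N^3)$ to $O(N B^2)$ for band width $B$:

```python
import numpy as np

def ldl_solve(A, b):
    """Solve A x = b for Hermitian positive-definite A via an LDL^H
    factorization without pivoting: A = L @ diag(d) @ L.conj().T with L
    unit lower triangular and d real."""
    n = A.shape[0]
    L = np.eye(n, dtype=complex)
    d = np.zeros(n)
    for j in range(n):
        d[j] = (A[j, j] - (np.abs(L[j, :j]) ** 2 @ d[:j])).real
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j].conj()) @ d[:j]) / d[j]
    # in a real implementation these are forward/backward substitutions
    z = np.linalg.solve(L, b)
    return np.linalg.solve(L.conj().T, z / d)

def mmse_detect(H, y, sigma2):
    """Linear MMSE estimate (H^H H + sigma^2 I)^{-1} H^H y via LDL^H."""
    G = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    return ldl_solve(G, H.conj().T @ y)
```

The Gram matrix $H^H H + \sigma^2 I$ is Hermitian positive definite, so the factorization exists without pivoting; when null symbols make $G$ banded, both the factorization and the substitutions stay within the band.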
Standard Bayesian learning is known to have suboptimal generalization capabilities under model misspecification and in the presence of outliers. PAC-Bayes theory demonstrates that the free energy criterion minimized by Bayesian learning is a bound on the generalization error for Gibbs predictors (i.e., for single models drawn at random from the posterior) under the assumption of sampling distributions uncontaminated by outliers. This viewpoint provides a justification for the limitations of Bayesian learning when the model is misspecified, in which case ensembling is required, and when data is affected by outliers. In recent work, PAC-Bayes bounds, referred to as PAC$^m$, were derived to introduce free energy metrics that account for the performance of ensemble predictors, obtaining enhanced performance under misspecification. This work presents a novel robust free energy criterion that combines the generalized logarithm score function with the PAC$^m$ ensemble bounds. The proposed free energy training criterion produces predictive distributions that are able to concurrently counteract the detrimental effects of model misspecification and outliers.
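A brief sketch of why a generalized logarithm score helps with outliers, assuming the family meant here is the standard $t$-logarithm $\log_t(x) = (x^{1-t} - 1)/(1-t)$ (an assumption on our part; it recovers $\ln$ as $t \to 1$):

```python
import math

def log_t(x, t):
    """Generalized logarithm: log_t(x) = (x**(1-t) - 1) / (1 - t), with the
    natural log recovered in the limit t -> 1."""
    if t == 1.0:
        return math.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

# Why this tames outliers: the standard log score -log(p) is unbounded as
# p -> 0, so a single outlier assigned a tiny likelihood can dominate the
# training criterion. For t < 1 the score -log_t(p) is bounded above by
# 1/(1-t), which caps the influence any one outlier can exert.
```

For instance, with $t = 0.7$ the score of a point with likelihood $10^{-300}$ is capped near $1/0.3 \approx 3.33$, whereas the standard log score for the same point exceeds $690$.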