Abstract:In the recent literature, when modeling information freshness in remote estimation settings, estimators have been restricted mainly to the class of martingale estimators, meaning that the remote estimate at any time equals the most recently received update. This is mainly due to their simplicity and ease of analysis. However, martingale estimators are far from optimal in some cases, especially in pull-based update systems. For such systems, maximum a posteriori probability (MAP) estimators are optimal, but can be challenging to analyze. Here, we introduce a new class of estimators, called structured estimators, which retain useful characteristics of a MAP estimate while remaining analytically tractable. Our proposed estimators interpolate seamlessly between a martingale estimator and a MAP estimator.
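For reference, a martingale estimator simply holds the most recently received sample. The following Python sketch illustrates this baseline rule for a hypothetical discrete-time random walk with randomly arriving updates; the structured and MAP estimators studied in the paper refine this hold-last-sample rule.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: a discrete-time random walk at the source, with
    # updates received at the monitor only at random instants.
    T = 20
    source = np.cumsum(rng.standard_normal(T))   # source process
    received = rng.random(T) < 0.3               # instants an update arrives
    received[0] = True                           # assume an initial update

    # Martingale estimator: hold the most recently received sample.
    estimate = np.empty(T)
    last = source[0]
    for t in range(T):
        if received[t]:
            last = source[t]
        estimate[t] = last

    print(np.mean((source - estimate) ** 2))     # empirical MSE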
Abstract:We consider the problem of timely tracking of a Wiener process via an energy-conserving sensor, using a single-bit quantization strategy under periodic sampling. Contrary to conventional single-bit quantizers, which use only the transmitted bit to convey information, our codebook includes an additional `$\emptyset$' symbol that encodes the event of \emph{not transmitting}. Thus, our quantization functions are composed of three decision regions as opposed to the conventional two. First, we propose an optimal quantization method in which the quantization functions are obtained by tracking the distributions of the quantization error. However, this method has a high computational cost and may not be suitable for energy-conserving sensors. Thus, we propose two additional low-complexity methods. In the last-bit aware method, three predefined quantization functions are available to both devices, which switch among them based on the last transmitted bit. In the Gaussian approximation method, we calculate a single quantization function by assuming that the quantization error is approximately Gaussian. While the previous methods require a constant-delay assumption, this method also works under random delay. We observe that all three methods perform similarly in terms of mean squared error and transmission cost.
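As a minimal sketch of the three-region idea, the following Python function maps the tracking error to the symbols {0, 1, ∅}, where ∅ means staying silent; the fixed thresholds here are hypothetical, whereas the paper's three methods choose the decision regions adaptively.

    EMPTY = None  # stands in for the '∅' (no transmission) symbol

    def quantize(error, lo=-1.0, hi=1.0):
        """Map the tracking error to one of three decision regions."""
        if error > hi:
            return 1      # transmit bit 1
        if error < lo:
            return 0      # transmit bit 0
        return EMPTY      # stay silent; silence signals |error| is small

    print([quantize(e) for e in (-2.3, 0.4, 1.7)])  # [0, None, 1]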
Abstract:We consider a source that shares updates with a network of $n$ gossiping nodes. The network's topology switches between two arbitrary topologies, with the switching governed by a two-state continuous-time Markov chain (CTMC). Information freshness is well understood for static networks; this work evaluates the impact of time-varying connections on information freshness. To quantify the freshness of information, we use the version age of information metric. If the two topologies, when static, have long-term average version ages of $f_1(n)$ and $f_2(n)$ with $f_1(n) \ll f_2(n)$, then the version age of the varying-topologies network is determined by $f_1(n)$, $f_2(n)$, and the transition rates of the CTMC. If the transition rates of the CTMC are faster than $f_1(n)$, the average version age of the varying-topologies network is $f_1(n)$. Further, we observe that the behavior of a vanishingly small fraction of nodes can severely degrade the long-term average version age of a network. This motivates the definition of a typical set of nodes in the network, and we evaluate the impact of fast and slow CTMC transition rates on this typical set.
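A minimal Python sketch of the switching process: sample the two-state CTMC that selects the active topology, with hypothetical transition rates; the long-run fraction of time spent in each topology follows from the chain's stationary distribution.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_topology_process(lam12, lam21, horizon):
        """Sample (start time, state, duration) segments of a 2-state CTMC."""
        t, state, path = 0.0, 0, []
        while t < horizon:
            rate = lam12 if state == 0 else lam21   # rate of leaving the state
            hold = rng.exponential(1.0 / rate)      # exponential holding time
            path.append((t, state, min(hold, horizon - t)))
            t += hold
            state = 1 - state
        return path

    path = simulate_topology_process(lam12=2.0, lam21=0.5, horizon=10.0)
    frac0 = sum(d for (_, s, d) in path if s == 0) / 10.0
    print(f"fraction of time in topology 1: {frac0:.2f}")  # -> lam21/(lam12+lam21)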
Abstract:Transformer architectures are prominent in wireless signal processing due to their ability to capture long-range dependencies via attention mechanisms, and the literature is abundant with methodologies that build on them. In particular, depthwise separable convolutions enhance parameter efficiency when processing the high-dimensional data characteristic of MIMO systems. In this work, we introduce a novel unsupervised deep learning framework that integrates depthwise separable convolutions and transformers to generate beamforming weights under imperfect channel state information (CSI) for a multi-user single-input multiple-output (MU-SIMO) system in dense urban environments. The primary goal is to enhance throughput by maximizing the sum-rate while ensuring reliable communication. Spectral efficiency and block error rate (BLER) are considered as performance metrics. Experiments are carried out under various conditions to compare the performance of the proposed neural network beamforming (NNBF) framework against the baseline methods of zero-forcing beamforming (ZFBF) and minimum mean squared error (MMSE) beamforming. Experimental results demonstrate the superiority of the proposed framework over the baseline techniques.
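A minimal PyTorch sketch of the depthwise separable convolution building block that underlies the framework's parameter efficiency; the channel counts, kernel size, and input shape below are illustrative, not the paper's architecture.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=3):
            super().__init__()
            # Depthwise: one filter per input channel (groups=in_ch).
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                       padding=kernel_size // 2, groups=in_ch)
            # Pointwise: 1x1 convolution mixes channels with few parameters.
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    x = torch.randn(8, 4, 16, 16)   # e.g., a batch of CSI feature maps
    print(DepthwiseSeparableConv(4, 32)(x).shape)  # torch.Size([8, 32, 16, 16])

Compared with a standard convolution, this factorization reduces the parameter count from roughly in_ch*out_ch*k^2 to in_ch*k^2 + in_ch*out_ch.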
Abstract:We develop a novel source coding strategy for sampling and monitoring of a Wiener process. For the encoding process, we employ a four-level ``quantization'' scheme that uses monotone function thresholds as opposed to fixed constant thresholds. Leveraging the hitting times of the Wiener process with these thresholds, we devise a sampling and encoding strategy that does not incur any quantization error. We give analytical expressions for the mean squared error (MSE) and find the optimal source code lengths that minimize the MSE under this monotone function threshold scheme, subject to a sampling rate constraint.
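A minimal Python sketch of the hitting-time mechanism: simulate a Wiener process on a fine grid and record when it first crosses one of two monotone function thresholds; the linear thresholds below are a hypothetical choice for illustration, not the paper's optimized thresholds.

    import numpy as np

    rng = np.random.default_rng(2)

    dt, T = 1e-4, 5.0
    t = np.arange(0, T, dt)
    steps = np.sqrt(dt) * rng.standard_normal(len(t) - 1)
    W = np.concatenate([[0.0], np.cumsum(steps)])   # Wiener process sample path

    upper = lambda s: 1.0 + 0.2 * s    # monotone increasing threshold
    lower = lambda s: -1.0 - 0.2 * s   # monotone decreasing threshold

    hit = np.flatnonzero((W >= upper(t)) | (W <= lower(t)))
    if hit.size:
        k = hit[0]
        print(f"first hitting time ~ {t[k]:.4f}, W = {W[k]:.3f}")

Because a sample is taken exactly when the process reaches a threshold value, the encoder knows the sampled value exactly, which is why the scheme incurs no quantization error.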
Abstract:We consider the private information retrieval (PIR) problem for a multigraph-based replication system, where each set of $r$ files is stored on two of the servers according to an underlying $r$-multigraph. Our goal is to establish upper and lower bounds on the PIR capacity of the $r$-multigraph. Specifically, we first propose a construction for multigraph-based PIR systems that leverages the symmetry of the underlying graph-based PIR scheme, deriving a capacity lower bound for such multigraphs. Then, we establish a general upper bound using linear programming, expressed as a function of the underlying graph parameters. Our bounds are shown to be tight for PIR systems on multipaths with an even number of vertices.
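As a heavily hedged illustration of the linear-programming step only: the paper's actual LP depends on the underlying graph parameters and is not reproduced here; the toy constraints below are placeholders showing how a rate upper bound can be computed with scipy.optimize.linprog.

    from scipy.optimize import linprog

    # Toy LP: maximize a rate R subject to placeholder linear constraints.
    # linprog minimizes, so the objective is negated.
    c = [-1.0]                   # maximize R
    A_ub = [[2.0], [3.0]]        # placeholder constraints: 2R <= 1, 3R <= 2
    b_ub = [1.0, 2.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)])
    print(-res.fun)              # resulting upper bound on R: 0.5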
Abstract:We consider the problem of finding the asymptotic capacity of symmetric private information retrieval (SPIR) with $B$ Byzantine servers. Prior to finding the capacity, a definition of Byzantine servers is needed, since two different definitions exist in the literature. In \cite{byzantine_tpir}, where the notion was first defined, Byzantine servers can send any symbol derived from their storage, their received queries, and some independent random symbols. In \cite{unresponsive_byzantine_1}, Byzantine servers send random symbols independently of their storage and queries. These definitions are clearly not identical, especially when \emph{symmetric} privacy is required. To that end, inspired by \cite{byzantine_tpir}, we define Byzantine servers as servers that can share everything, before and after the initiation of the scheme. In this setting, we derive an upper bound, for the case of an infinite number of messages, that must be satisfied by all schemes that protect against such servers, and we develop a scheme that achieves this upper bound, thereby identifying the capacity of the problem.
Abstract:We study a linear computation problem over a quantum multiple access channel (LC-QMAC), where $S$ servers share an entangled state and separately store classical data streams $W_1,\cdots, W_S$ over a finite field $\mathbb{F}_d$. A user aims to compute $K$ linear combinations of these data streams, represented as $Y = \mathbf{V}_1 W_1 + \mathbf{V}_2 W_2 + \cdots + \mathbf{V}_S W_S \in \mathbb{F}_d^{K \times 1}$. To this end, each server encodes its classical information into its local quantum subsystem and transmits it to the user, who retrieves the desired computations via quantum measurements. In this work, we propose an achievable scheme for the LC-QMAC based on the stabilizer formalism and ideas from entanglement-assisted quantum error-correcting codes (EAQECC). Specifically, given any linear computation matrix, we construct a self-orthogonal matrix that can be implemented via the stabilizer formalism. We also apply precoding matrices to minimize the number of auxiliary qudits required. Our scheme achieves more computations per qudit, i.e., a higher computation rate, than the best-known methods in the literature, and attains the capacity in certain cases.
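The target computation itself is classical and easy to state concretely. A minimal Python sketch of $Y = \mathbf{V}_1 W_1 + \cdots + \mathbf{V}_S W_S$ over $\mathbb{F}_d$ for illustrative sizes; the quantum encoding and measurement are not modeled here.

    import numpy as np

    d, K, S, L = 5, 2, 2, 3   # field size, outputs, servers, symbols per server

    rng = np.random.default_rng(3)
    V = [rng.integers(0, d, size=(K, L)) for _ in range(S)]  # computation matrices
    W = [rng.integers(0, d, size=(L, 1)) for _ in range(S)]  # servers' data

    Y = sum(Vs @ Ws for Vs, Ws in zip(V, W)) % d             # K x 1 result in F_d
    print(Y.ravel())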
Abstract:Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating external knowledge into the generation context, improving accuracy and reducing hallucinations. However, multi-modal RAG systems face unique challenges: (i) the retrieval process may select entries irrelevant to the user query (e.g., images, documents), and (ii) vision-language models or multi-modal language models such as GPT-4o may hallucinate when processing these entries to generate the RAG output. In this paper, we address the first challenge, i.e., improving the selection of relevant context from the knowledge base in the retrieval phase of multi-modal RAG. Specifically, we leverage the relevancy score (RS) measure, designed in our previous work for evaluating RAG performance, to select more relevant entries during retrieval. Retrieval based on embeddings, such as CLIP embeddings with cosine similarity, usually performs poorly, particularly for multi-modal data. We show that a more advanced relevancy measure enhances the retrieval process by selecting more relevant pieces from the knowledge base and eliminates irrelevant pieces from the context by adaptively selecting up to $k$ entries instead of a fixed number of entries. Our evaluation using the COCO dataset demonstrates significant enhancement in selecting relevant context and in the accuracy of the generated response.
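A minimal Python sketch of the adaptive up-to-$k$ selection: keep at most $k$ entries whose relevancy score clears a threshold, instead of always returning a fixed top-$k$; the scores and threshold below are hypothetical stand-ins for the RS model's outputs.

    import numpy as np

    def select_context(scores, k=5, threshold=0.6):
        """Return indices of at most k entries scoring above the threshold."""
        order = np.argsort(scores)[::-1]   # most relevant first
        return [int(i) for i in order[:k] if scores[i] >= threshold]

    scores = np.array([0.91, 0.35, 0.72, 0.58, 0.88, 0.11])
    print(select_context(scores))          # [0, 4, 2]: fewer than k survive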
Abstract:Retrieval-augmented generation (RAG) improves large language models (LLMs) by using external knowledge to guide response generation, reducing hallucinations. However, RAG, and particularly multi-modal RAG, can introduce new hallucination sources: (i) the retrieval process may select irrelevant pieces (e.g., documents, images) as raw context from the database, and (ii) retrieved images are processed into text-based context via vision-language models (VLMs) or used directly by multi-modal language models (MLLMs) such as GPT-4o, which may hallucinate. To address this, we propose a novel framework to evaluate the reliability of multi-modal RAG using two performance measures: (i) the relevancy score (RS), assessing the relevance of retrieved entries to the query, and (ii) the correctness score (CS), evaluating the accuracy of the generated response. We train the RS and CS models using a ChatGPT-derived database and human evaluator samples. Results show that both models achieve ~88% accuracy on test data. Additionally, we construct a 5000-sample human-annotated database evaluating the relevancy of retrieved pieces and the correctness of response statements. Our RS model aligns with human preferences 20% more often than CLIP in retrieval, and our CS model matches human preferences ~91% of the time. Finally, we assess the selection and generation performance of various RAG systems using RS and CS.
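A minimal Python sketch of the alignment measurement: the fraction of samples on which a model's preferred entry matches the human-annotated choice; the toy lists are hypothetical placeholders for model outputs and human labels.

    def agreement_rate(model_choices, human_choices):
        """Fraction of samples where the model agrees with the human label."""
        matches = sum(m == h for m, h in zip(model_choices, human_choices))
        return matches / len(human_choices)

    human    = [2, 0, 1, 1, 3]
    rs_model = [2, 0, 1, 2, 3]
    clip     = [2, 1, 0, 2, 3]
    print(agreement_rate(rs_model, human), agreement_rate(clip, human))  # 0.8 0.4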