Abstract: We revisit the problem of symmetric private information retrieval (SPIR) in settings where the database replication is modeled by a simple graph. Here, each vertex corresponds to a server, and a message is replicated on two servers if and only if there is an edge between them. To satisfy the requirement of database privacy, we let all the servers share some common randomness, independent of the messages. We aim to quantify the improvement in SPIR capacity, i.e., the maximum ratio of the number of desired symbols to the number of downloaded symbols, compared to the setting where the common randomness is also graph-replicated. Towards this, we develop an algorithm to convert a class of PIR schemes into corresponding SPIR schemes, thereby establishing a capacity lower bound for graphs on which such schemes exist. This includes the class of path and cyclic graphs, for which we derive capacity upper bounds that are tighter than the trivial bounds given by the respective PIR capacities. For the special case of the path graph with three vertices, we identify the SPIR capacity to be $\frac{1}{2}$.
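For concreteness, the rate notion used above can be written as follows, with notation assumed here only for illustration: $L$ is the message length and $D_n$ is the number of symbols downloaded from server $n$,
\[
R = \frac{L}{\sum_{n} D_n}, \qquad C_{\mathrm{SPIR}} = \sup_{\text{feasible SPIR schemes}} R .
\]
The reported capacity of $\frac{1}{2}$ for the three-vertex path graph thus means that, at best, two symbols must be downloaded for every desired message symbol.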
Abstract: We consider the problem of decentralized constrained optimization with multiple agents $E_1,\ldots,E_N$ who jointly wish to learn the optimal solution set while keeping their feasible sets $\mathcal{P}_1,\ldots,\mathcal{P}_N$ private from each other. We assume that the objective function $f$ is known to all agents and that each feasible set is a collection of points from a universal alphabet $\mathcal{P}_{alph}$. A designated agent (leader) starts the communication with the remaining (non-leader) agents and is the first to retrieve the solution set. The leader searches for the solution by sending queries to and receiving answers from the non-leaders, such that the information on the individual feasible sets revealed to the leader is no more than nominal, i.e., what is revealed from learning the solution set alone. We develop achievable schemes for obtaining the solution set at nominal information leakage, and characterize their communication costs under two communication setups between agents: i) ring, where each agent communicates with two adjacent agents, and ii) star, where only the leader communicates with the remaining agents. We show that, if the leader first learns the joint feasible set through an existing private set intersection (PSI) protocol and then deduces the solution set, the information leaked to the leader is greater than nominal. Moreover, we draw a connection between our schemes and threshold PSI (ThPSI), a variant of PSI in which the intersection is revealed only when its cardinality exceeds a threshold value. Finally, for various realizations of $f$ mapped uniformly at random to a fixed range of values, our schemes are more communication-efficient with high probability compared to retrieving the entire feasible set through PSI.
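As a toy illustration of the problem statement only (not of the privacy-preserving protocols themselves), the sketch below computes the joint feasible set and the solution set that the leader ultimately learns; the sets and the objective are hypothetical examples.

```python
# Toy illustration of the setup: each agent holds a private feasible set over a
# universal alphabet, and the target is the set of points in the joint feasible
# set (the intersection) that minimize a known objective f.
# This computes the functionality only; it does NOT implement a private protocol.

def solution_set(feasible_sets, f):
    """Return the argmin of f over the intersection of all feasible sets."""
    joint = set.intersection(*feasible_sets)          # joint feasible set
    if not joint:
        return set()
    best = min(f(p) for p in joint)                   # optimal objective value
    return {p for p in joint if f(p) == best}         # optimal solution set

# Hypothetical example: alphabet of integers, objective known to all agents.
P1 = {1, 2, 3, 5, 8}
P2 = {2, 3, 5, 7}
P3 = {3, 5, 9}
f = lambda x: (x - 4) ** 2

print(solution_set([P1, P2, P3], f))                  # prints {3, 5}
```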
Abstract: Integrated Sensing and Communications (ISAC) is a promising technology for future wireless networks, enabling simultaneous communication and sensing using shared resources. This paper investigates the performance of full-duplex (FD) communication in near-field ISAC systems, where spherical-wave propagation introduces unique beam-focusing capabilities. We propose a joint optimization framework for transmit and receive beamforming at the base station to minimize transmit power while satisfying rate constraints for multi-user downlink transmission, multi-user uplink reception, and multi-target sensing. Our approach employs alternating optimization combined with semidefinite relaxation and Rayleigh quotient techniques to address the non-convexity of the problem. Simulation results demonstrate that FD-enabled near-field ISAC achieves superior power efficiency compared to half-duplex and far-field benchmarks, effectively detecting targets at identical angles while meeting communication requirements.
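One building block mentioned above, the Rayleigh-quotient step for receive beamforming, admits a closed-form solution. The sketch below shows that step alone under randomly drawn channels; the dimensions and variable names are assumptions for illustration, and the full joint transmit/receive design with semidefinite relaxation is not reproduced here.

```python
import numpy as np

# Receive beamforming via the generalized Rayleigh quotient:
# maximize |w^H h|^2 / (w^H R w), whose maximizer is w proportional to R^{-1} h.
rng = np.random.default_rng(0)
N = 16                                                  # receive antennas (illustrative)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)              # desired channel
H_i = (rng.standard_normal((N, 3)) + 1j * rng.standard_normal((N, 3))) / np.sqrt(2)  # interferers
sigma2 = 0.1

R = H_i @ H_i.conj().T + sigma2 * np.eye(N)             # interference-plus-noise covariance
w = np.linalg.solve(R, h)                               # MVDR-type solution, w ~ R^{-1} h
w /= np.linalg.norm(w)

sinr = np.abs(w.conj() @ h) ** 2 / np.real(w.conj() @ R @ w)
print(f"output SINR: {10 * np.log10(sinr):.2f} dB")
```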
Abstract: We introduce the problem of symmetric private information retrieval (SPIR) on replicated databases modeled by a simple graph. In this model, each vertex corresponds to a server, and a message is replicated on two servers if and only if there is an edge between them. We consider the setting where the server-side common randomness necessary to accomplish SPIR is also replicated at the servers according to the graph, and we call this message-specific common randomness. In this setting, we establish a lower bound on the SPIR capacity, i.e., the maximum download rate, for general graphs by proposing an achievable SPIR scheme. Next, we prove that, for any SPIR scheme to be feasible, the minimum size of the message-specific common randomness must be equal to the size of a message. Finally, by providing matching upper bounds, we derive the exact SPIR capacity for the class of path and regular graphs.
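In entropy terms, the randomness-size result stated above can be phrased as follows, with notation assumed only for illustration: for a message $W_e$ of length $L$ replicated on edge $e$ and its message-specific common randomness $S_e$,
\[
H(S_e) \;\geq\; H(W_e) \;=\; L ,
\]
i.e., every message must be protected by at least one message-length worth of common randomness shared by the two servers storing it.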
Abstract: Recent advancements in the reasoning capabilities of large language models (LLMs) show that employing the group relative policy optimization (GRPO) algorithm for reinforcement learning (RL) training allows the models to use more thinking/reasoning tokens for generating better responses. However, LLMs can generate only a finite number of tokens while maintaining attention to the previously generated tokens. This limit, also known as the context size of an LLM, is a bottleneck in LLM reasoning with an arbitrarily large number of tokens. To think beyond the limit of its context size, an LLM must employ a modular thinking strategy that reasons over multiple rounds. In this work, we propose $\textbf{MOTIF: Modular Thinking via Reinforcement Finetuning}$ -- an RL training method for generating thinking tokens in multiple rounds, effectively allowing the model to think with additional context. We trained the open-source model Qwen2.5-3B-Instruct on the GSM8K dataset via parameter-efficient fine-tuning and tested its accuracy on the MATH500 and AIME2024 benchmarks. Our experiments show 3.8\% and 3.3\% improvements over vanilla GRPO-based training on the respective benchmarks. Furthermore, this improvement was achieved with only 15\% of the samples, demonstrating the sample efficiency of MOTIF. Our code and models are available at https://github.com/purbeshmitra/MOTIF and https://huggingface.co/purbeshmitra/MOTIF, respectively.
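A minimal sketch of a multi-round (modular) inference loop is given below, assuming a Hugging Face causal LM; the prompt template, round count, and the carry-over of only the previous round's output are hypothetical simplifications, and the GRPO training procedure itself is not reproduced.

```python
# Minimal sketch of multi-round ("modular") reasoning at inference time.
# Assumes the transformers library; the prompt and carry-over format are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-3B-Instruct"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

def solve_in_rounds(question, rounds=3, tokens_per_round=512):
    carry = ""                                        # summary carried across rounds
    for r in range(rounds):
        prompt = (f"Question: {question}\n"
                  f"Previous progress: {carry}\n"
                  f"Continue reasoning in round {r + 1} and, if possible, "
                  f"state the final answer.\n")
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=tokens_per_round, do_sample=False)
        new_text = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
        carry = new_text                              # next round sees only this round's output
    return carry

print(solve_in_rounds("What is the sum of the first 100 positive integers?"))
```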




Abstract: In the recent literature, when modeling information freshness in remote estimation settings, estimators have mainly been restricted to the class of martingale estimators, meaning that the remote estimate at any time equals the most recently received update. This is mainly due to their simplicity and ease of analysis. However, martingale estimators are far from optimal in some cases, especially in pull-based update systems. For such systems, maximum a posteriori probability (MAP) estimators are optimal, but can be challenging to analyze. Here, we introduce a new class of estimators, called structured estimators, which retain useful characteristics of a MAP estimator while still being analytically tractable. Our proposed estimators transition seamlessly from a martingale estimator to a MAP estimator.
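For reference, the martingale estimator discussed above simply holds the most recently received update. A minimal sketch under an assumed random-walk source and assumed reception times is shown below; the structured estimators themselves are not reproduced here.

```python
import numpy as np

# Martingale estimator: the remote estimate equals the most recently received update.
# The source is an assumed discrete-time random walk; reception times are assumed.
rng = np.random.default_rng(1)
T = 50
source = np.cumsum(rng.standard_normal(T))        # sample path of the source
receive_times = sorted(rng.choice(T, size=5, replace=False))

estimate = np.zeros(T)
last = 0.0
for t in range(T):
    if t in receive_times:
        last = source[t]                          # update received at time t
    estimate[t] = last                            # hold the last received value

mse = np.mean((source - estimate) ** 2)
print(f"empirical MSE of the martingale (hold-last) estimator: {mse:.3f}")
```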
Abstract: We consider the problem of timely tracking of a Wiener process via an energy-conserving sensor by utilizing a single-bit quantization strategy under periodic sampling. Contrary to conventional single-bit quantizers, which only utilize the transmitted bit to convey information, our codebook uses an additional `$\emptyset$' symbol to encode the event of \emph{not transmitting}. Thus, our quantization functions are composed of three decision regions as opposed to the conventional two. First, we propose an optimum quantization method in which the optimum quantization functions are obtained by tracking the distributions of the quantization error. However, this method incurs a high computational cost and may not be suitable for energy-conserving sensors. Thus, we propose two additional low-complexity methods. In the last-bit aware method, three predefined quantization functions are available to both devices, and they switch the quantization function based on the last transmitted bit. In the Gaussian approximation method, we calculate a single quantization function by assuming that the quantization error can be approximated as Gaussian. While the previous methods require a constant-delay assumption, this method also works under random delays. We observe that all three methods perform similarly in terms of mean-squared error and transmission cost.
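The three-region codebook described above can be illustrated with a toy periodic-sampling simulation; the fixed threshold and reconstruction step size below are hypothetical stand-ins and are not the optimized quantization functions from the paper.

```python
import numpy as np

# Toy illustration of the three-region codebook {0, 1, 'null'} for tracking a
# Wiener process: a bit is transmitted only when the tracking error is large,
# and nothing is transmitted otherwise. Threshold and step size are hypothetical.
rng = np.random.default_rng(0)
T, dt, thresh, step = 1000, 0.01, 0.5, 0.5

w = np.cumsum(np.sqrt(dt) * rng.standard_normal(T))   # Wiener process, periodic samples
est = np.zeros(T)
transmissions = 0
for t in range(1, T):
    err = w[t] - est[t - 1]
    if err > thresh:            # region 1: transmit bit 1, receiver steps up
        est[t] = est[t - 1] + step
        transmissions += 1
    elif err < -thresh:         # region 0: transmit bit 0, receiver steps down
        est[t] = est[t - 1] - step
        transmissions += 1
    else:                       # region 'null': no transmission, receiver holds
        est[t] = est[t - 1]

print(f"MSE = {np.mean((w - est) ** 2):.4f}, transmissions = {transmissions}/{T}")
```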


Abstract: We consider a source that shares updates with a network of $n$ gossiping nodes. The network's topology switches between two arbitrary topologies, with the switching governed by a two-state continuous time Markov chain (CTMC). Information freshness is well understood for static networks; this work evaluates the impact of time-varying connections on information freshness. To quantify the freshness of information, we use the version age of information metric. If the two topologies, when static, have long-term average version ages of $f_1(n)$ and $f_2(n)$ with $f_1(n) \ll f_2(n)$, then the version age of the varying-topologies network is related to $f_1(n)$, $f_2(n)$, and the transition rates of the CTMC. If the CTMC transition rates are faster than $f_1(n)$, the average version age of the varying-topologies network is $f_1(n)$. Further, we observe that the behavior of a vanishingly small fraction of nodes can severely degrade the long-term average version age of a network. This motivates the definition of a typical set of nodes in the network. We evaluate the impact of fast and slow CTMC transition rates on this typical set of nodes.
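A crude discretized simulation of this setting is sketched below to show the moving parts (source updates, gossiping over the currently active topology, and CTMC-driven switching); all rates, the two example topologies, and the time-discretization are hypothetical choices for illustration and do not follow the paper's analysis.

```python
import numpy as np

# Discretized toy simulation of version age in a gossip network whose topology
# switches between two adjacency matrices according to a two-state CTMC.
rng = np.random.default_rng(0)
n, dt, T = 20, 0.01, 200.0
lam_src, lam_update, lam_gossip = 1.0, 1.0, 1.0     # source / source-to-node / gossip rates
q12, q21 = 0.5, 0.5                                  # CTMC switching rates

A1 = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)   # ring topology
A2 = np.zeros((n, n)); A2[0, 1:] = A2[1:, 0] = 1                      # star topology

state, src_version = 0, 0
versions = np.zeros(n, dtype=int)
age_sum, steps, t = 0.0, 0, 0.0
while t < T:
    A = A1 if state == 0 else A2
    if rng.random() < lam_src * dt:                  # source generates a new version
        src_version += 1
    updated = versions.copy()
    for i in range(n):
        if rng.random() < (lam_update / n) * dt:     # source pushes to node i
            updated[i] = src_version
        for j in np.flatnonzero(A[i]):               # node i gossips to its neighbors
            if rng.random() < (lam_gossip / max(A[i].sum(), 1)) * dt:
                updated[j] = max(updated[j], versions[i])
    versions = updated
    if rng.random() < (q12 if state == 0 else q21) * dt:
        state = 1 - state                            # CTMC topology switch
    age_sum += np.mean(src_version - versions)
    steps += 1
    t += dt

print(f"time-averaged version age per node: {age_sum / steps:.2f}")
```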




Abstract: Transformer architectures are prominent in wireless signal processing owing to their ability to capture long-range dependencies via attention mechanisms, and the literature is abundant with methodologies built around them. In addition, depthwise separable convolutions enhance parameter efficiency when processing the high-dimensional data characteristic of MIMO systems. In this work, we introduce NNBF, a novel unsupervised deep learning framework that integrates depthwise separable convolutions and transformers to generate beamforming weights under imperfect channel state information (CSI) for a multi-user single-input multiple-output (MU-SIMO) system in dense urban environments. The primary goal is to enhance throughput by maximizing the sum-rate while ensuring reliable communication. Spectral efficiency and block error rate (BLER) are considered as performance metrics. Experiments are carried out under various conditions to compare the performance of the proposed NNBF framework against the baseline methods of zero-forcing beamforming (ZFBF) and minimum mean square error (MMSE) beamforming. Experimental results demonstrate the superiority of the proposed framework over these baseline techniques.
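A skeletal PyTorch module combining the two ingredients named above (a depthwise separable convolution front end followed by a transformer encoder) is sketched below; the tensor shapes, layer sizes, and the final mapping to beamforming weights are assumptions for illustration and not the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise (per-channel) convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, channels, out_channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv1d(channels, out_channels, 1)

    def forward(self, x):                    # x: (batch, channels, length)
        return self.pointwise(self.depthwise(x))

class NNBFSketch(nn.Module):
    """Illustrative CSI -> beamforming-weight mapping (all shapes are assumptions)."""
    def __init__(self, n_users=4, n_rx=16, d_model=64):
        super().__init__()
        self.front = DepthwiseSeparableConv1d(2 * n_users, d_model)    # real/imag CSI channels
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, 2 * n_users)                    # real/imag weights per antenna

    def forward(self, csi):                  # csi: (batch, 2*n_users, n_rx)
        z = self.front(csi).transpose(1, 2)  # (batch, n_rx, d_model)
        z = self.encoder(z)
        return self.head(z)                  # (batch, n_rx, 2*n_users)

w = NNBFSketch()(torch.randn(8, 8, 16))
print(w.shape)                               # torch.Size([8, 16, 8])
```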




Abstract: We develop a novel source coding strategy for sampling and monitoring of a Wiener process. For the encoding process, we employ a four-level ``quantization'' scheme that uses monotone function thresholds as opposed to fixed constant thresholds. Leveraging the hitting times of the Wiener process to these thresholds, we devise a sampling and encoding strategy that does not incur any quantization error. We give analytical expressions for the mean squared error (MSE) and find the optimal source code lengths that minimize the MSE under this monotone function threshold scheme, subject to a sampling rate constraint.
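The sketch below illustrates the basic idea of sampling at hitting times of time-varying (monotone) thresholds rather than fixed constants, using a simplified two-threshold variant of the four-level scheme; the particular thresholds $\pm(a + b\tau)$, with $\tau$ the time since the last sample, are hypothetical, the receiver is assumed to observe the transmission instants, and the small residual error here is only due to time discretization (the continuous-time scheme samples exactly at the hitting time and thus incurs no quantization error).

```python
import numpy as np

# Discretized toy: sample a Wiener process when it hits a monotone threshold
# measured from the last sample; the receiver reconstructs the threshold value.
rng = np.random.default_rng(0)
dt, T, a, b = 1e-3, 50.0, 0.5, 0.05
n = int(T / dt)

w = np.cumsum(np.sqrt(dt) * rng.standard_normal(n))     # Wiener sample path
recon = np.zeros(n)                                      # receiver's estimate
last_val, last_idx, samples = 0.0, 0, 0
for k in range(1, n):
    tau = (k - last_idx) * dt
    upper, lower = a + b * tau, -(a + b * tau)           # monotone thresholds
    if w[k] - last_val >= upper or w[k] - last_val <= lower:
        bit = 1 if w[k] - last_val >= upper else 0       # which threshold was hit
        last_val += upper if bit else lower              # reconstruct the threshold value
        last_idx = k
        samples += 1
    recon[k] = last_val                                  # hold between samples

print(f"samples: {samples}, empirical MSE: {np.mean((w - recon) ** 2):.4f}")
```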