Shiva Prasad Kasiviswanathan

Differentially Private Conditional Independence Testing

Jun 11, 2023
Iden Kalemaj, Shiva Prasad Kasiviswanathan, Aaditya Ramdas

Conditional independence (CI) tests are widely used in statistical data analysis, e.g., they are the building block of many algorithms for causal graph discovery. The goal of a CI test is to accept or reject the null hypothesis that $X \perp \!\!\! \perp Y \mid Z$, where $X \in \mathbb{R}, Y \in \mathbb{R}, Z \in \mathbb{R}^d$. In this work, we investigate conditional independence testing under the constraint of differential privacy. We design two private CI testing procedures: one based on the generalized covariance measure of Shah and Peters (2020) and another based on the conditional randomization test of Candès et al. (2016) (under the model-X assumption). We provide theoretical guarantees on the performance of our tests and validate them empirically. These are the first private CI tests that work for the general case when $Z$ is continuous.
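
To make the first construction concrete, here is a minimal sketch of a GCM-style private test. The linear regressions for the conditional means, the clipping bound B, and the way the privacy budget is split across the two released moments are illustrative assumptions, not the paper's exact calibration; the sketch only shows how a Gaussian-mechanism step can be attached to the generalized covariance measure.

```python
# Illustrative sketch: a GCM-style CI test with a Gaussian-mechanism
# privatization step. Linear conditional-mean models, the clip bound B,
# and the budget split are assumptions, not the paper's construction.
import numpy as np

def private_gcm_test(X, Y, Z, eps=1.0, delta=1e-5, B=5.0, rng=None):
    rng = np.random.default_rng(rng)
    n = len(X)
    Z1 = np.column_stack([np.ones(n), Z])
    # Residuals from regressing X on Z and Y on Z (linear stand-ins).
    rX = X - Z1 @ np.linalg.lstsq(Z1, X, rcond=None)[0]
    rY = Y - Z1 @ np.linalg.lstsq(Z1, Y, rcond=None)[0]
    R = np.clip(rX * rY, -B, B)             # bounded residual products
    # Gaussian mechanism: replacing one record moves the first moment by
    # at most 2B/n and the second by 4B^2/n; spend half the budget each.
    sigma = np.sqrt(2 * np.log(1.25 / (delta / 2))) / (eps / 2)
    m1 = R.mean() + rng.normal(0, sigma * 2 * B / n)
    m2 = (R**2).mean() + rng.normal(0, sigma * 4 * B**2 / n)
    T = np.sqrt(n) * m1 / np.sqrt(max(m2 - m1**2, 1e-12))
    return abs(T) > 1.96                    # level-0.05 normal threshold
```

Under the null and with well-estimated conditional means, the non-private statistic is approximately standard normal; the added noise is of order $1/n$, so the normal threshold remains sensible for large $n$.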

Debiasing Conditional Stochastic Optimization

Apr 20, 2023
Lie He, Shiva Prasad Kasiviswanathan

In this paper, we study the conditional stochastic optimization (CSO) problem, which covers a variety of applications including portfolio selection, reinforcement learning, robust learning, and causal inference. The sample-averaged gradient of the CSO objective is biased due to its nested structure and therefore requires a high sample complexity to reach convergence. We introduce a general stochastic extrapolation technique that effectively reduces the bias. We show that for nonconvex smooth objectives, combining this extrapolation with variance reduction techniques achieves a significantly better sample complexity than existing bounds. We also develop new algorithms for the finite-sum variant of CSO that significantly improve upon existing results. Finally, we believe our debiasing technique may prove useful for other stochastic optimization problems as well.
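
The paper applies extrapolation to gradient estimators; the bias-cancellation principle is easiest to see on a toy scalar nested expectation, sketched below with an assumed quadratic outer function so the plug-in bias can be computed exactly.

```python
# Toy illustration of debiasing by extrapolation on a nested expectation
# f(E[g(eta)]) with f(u) = u**2 and g(eta) = eta ~ N(0, 1), so the true
# value is 0 and the plug-in bias is exactly Var/m = 1/m. The assumed f
# and the scalar setting are for illustration; the paper extrapolates
# gradient estimators of the CSO objective.
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: u**2

def plug_in(m):                        # biased plug-in estimator, bias 1/m
    return f(rng.normal(0, 1, m).mean())

def extrapolated(m):                   # Richardson-style bias cancellation
    s = rng.normal(0, 1, 2 * m)
    half = (f(s[:m].mean()) + f(s[m:].mean())) / 2   # bias ~ 1/m
    full = f(s.mean())                                # bias ~ 1/(2m)
    return 2 * full - half                            # leading bias cancels

m, trials = 16, 50_000
print(np.mean([plug_in(m) for _ in range(trials)]))      # ~ 1/16 = 0.0625
print(np.mean([extrapolated(m) for _ in range(trials)])) # ~ 0
```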

Interventional and Counterfactual Inference with Diffusion Models

Feb 02, 2023
Patrick Chao, Patrick Blöbaum, Shiva Prasad Kasiviswanathan

We consider the problem of answering observational, interventional, and counterfactual queries in a causally sufficient setting where only observational data and the causal graph are available. Building on recent developments in diffusion models, we introduce diffusion-based causal models (DCM) that learn causal mechanisms and generate unique latent encodings, allowing direct sampling under interventions as well as abduction for counterfactuals. We use DCM to model the structural equations; diffusion models are a natural candidate here, since they encode each node to a latent representation that acts as a proxy for the exogenous noise, and they offer flexible and accurate modeling that yields reliable causal statements and estimates. Our empirical evaluations demonstrate significant improvements over existing state-of-the-art methods for answering causal queries. Our theoretical results provide a methodology for analyzing the counterfactual error of general encoder/decoder models, which could be of independent interest.
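
The counterfactual pipeline described here is the classical abduction-action-prediction recipe. The sketch below shows its skeleton on a toy three-node graph, with the diffusion encoder/decoder replaced by an additive-noise stand-in; the graph, mechanisms, and numbers are all assumed for illustration.

```python
# Skeleton of the abduction-action-prediction recipe. The diffusion
# encoder/decoder pair is replaced by an additive-noise stand-in
# (encode = subtract mechanism, decode = add noise back); in DCM these
# would be the diffusion forward/reverse maps per node.
graph = {"Z": [], "X": ["Z"], "Y": ["Z", "X"]}           # toy DAG: parents
mech = {"Z": lambda p: 0.0,                              # assumed mechanisms
        "X": lambda p: 2.0 * p["Z"],
        "Y": lambda p: p["Z"] + 3.0 * p["X"]}
order = ["Z", "X", "Y"]                                  # topological order

def encode(obs):                       # abduction: recover latents u_v
    return {v: obs[v] - mech[v]({p: obs[p] for p in graph[v]}) for v in order}

def decode(u, do=None):                # action + prediction
    do, out = do or {}, {}
    for v in order:
        out[v] = do.get(v, mech[v]({p: out[p] for p in graph[v]}) + u[v])
    return out

obs = {"Z": 1.0, "X": 2.5, "Y": 9.0}                     # one observed sample
u = encode(obs)
print(decode(u, do={"X": 0.0}))        # counterfactual: had X been 0
```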

Thompson Sampling with Diffusion Generative Prior

Jan 12, 2023
Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton, Patrick Blöbaum

In this work, we propose using denoising diffusion models to learn priors for online decision-making problems. We focus on the meta-learning-for-bandits framework, with the goal of learning a strategy that performs well across bandit tasks of the same class. To this end, we train a diffusion model that learns the underlying task distribution and combine Thompson sampling with the learned prior to handle new tasks at test time. Our posterior sampling algorithm carefully balances the learned prior against the noisy observations that come from the learner's interaction with the environment. To capture realistic bandit scenarios, we also propose a novel diffusion model training procedure that trains even from incomplete and/or noisy data, which could be of independent interest. Finally, our extensive experimental evaluations demonstrate the potential of the proposed approach.
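
A particle-based sketch of the test-time loop: the diffusion prior is stood in for by a bank of prior samples over arm means, and the posterior is approximated by importance-weighting those samples against observed Gaussian rewards. All constants and the particle approximation are illustrative assumptions, not the paper's algorithm.

```python
# Particle-based sketch of Thompson sampling with a learned prior: a bank
# of prior samples stands in for the diffusion model, and the posterior
# is approximated by importance weights from the Gaussian reward
# likelihood. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
K, N, T, sigma = 5, 2000, 500, 0.5
prior = rng.normal(0, 1, (N, K))       # stand-in for diffusion prior samples
truth = prior[rng.integers(N)]         # task drawn from the same prior
log_w = np.zeros(N)                    # log importance weights

total = 0.0
for t in range(T):
    w = np.exp(log_w - log_w.max())
    theta = prior[rng.choice(N, p=w / w.sum())]   # approximate posterior draw
    a = int(theta.argmax())                       # Thompson action
    r = truth[a] + sigma * rng.normal()
    log_w += -(r - prior[:, a]) ** 2 / (2 * sigma**2)  # Gaussian likelihood
    total += r
print("regret per step:", truth.max() - total / T)
```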

Sequential Kernelized Independence Testing

Dec 14, 2022
Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas

Independence testing is a fundamental and classical statistical problem that has been extensively studied in the batch setting, where the sample size is fixed before collecting data. However, practitioners often prefer procedures that adapt to the complexity of the problem at hand instead of setting the sample size in advance. Ideally, such procedures should (a) allow stopping earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence as new data are collected, while controlling the false alarm rate. Classical batch tests are not tailored for streaming data: valid inference after data peeking requires correcting for multiple testing, and such corrections generally result in low power. In this paper, we design sequential kernelized independence tests (SKITs) that overcome these shortcomings based on the principle of testing by betting. We exemplify our broad framework using bets inspired by kernelized dependence measures such as the Hilbert-Schmidt independence criterion (HSIC) and the constrained-covariance criterion (COCO). Importantly, we also generalize the framework to non-i.i.d., time-varying settings, for which no batch tests exist. We demonstrate the power of our approaches on both simulated and real data.
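
A minimal testing-by-betting sketch follows. The payoff compares a bounded witness on two fresh pairs with its cross-pairings, so it has mean zero under independence; the wealth process is then a nonnegative martingale under the null, and Ville's inequality bounds the false alarm rate by alpha. The Gaussian witness and fixed bet fraction are simplifying assumptions; the paper's SKITs use HSIC/COCO-style payoffs with adaptively chosen bets.

```python
# Minimal testing-by-betting sketch in the spirit of SKIT. Each step
# consumes two fresh (x, y) pairs; under H0 (independence) the payoff
# has mean zero, so wealth is a nonnegative martingale and rejecting
# once it exceeds 1/alpha controls the type-I error (Ville). The fixed
# bet `lam` and Gaussian witness are simplifying assumptions.
import numpy as np

def skit_sketch(stream, alpha=0.05, lam=0.2):
    h = lambda x, y: np.exp(-(x - y) ** 2)        # bounded witness in [0, 1]
    wealth = 1.0
    for t, ((x1, y1), (x2, y2)) in enumerate(stream):
        payoff = h(x1, y1) + h(x2, y2) - h(x1, y2) - h(x2, y1)
        wealth *= 1.0 + lam * payoff / 2.0        # payoff / 2 lies in [-1, 1]
        if wealth >= 1.0 / alpha:
            return "reject H0", t                 # dependence detected
    return "fail to reject", None

def dependent_pairs(n, rng):                      # toy stream: Y = X + noise
    for _ in range(n):
        x1, x2 = rng.normal(size=2)
        yield (x1, x1 + 0.3 * rng.normal()), (x2, x2 + 0.3 * rng.normal())

print(skit_sketch(dependent_pairs(5000, np.random.default_rng(0))))
```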

Uplifting Bandits

Jun 08, 2022
Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton

We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them. After each action, the agent observes the realizations of all the variables. This model is motivated by marketing campaigns and recommender systems, where the variables represent outcomes on individual customers, such as clicks. We propose UCB-style algorithms that estimate the uplifts of the actions over a baseline. We study multiple variants of the problem, including when the baseline and affected variables are unknown, and prove sublinear regret bounds for all of these. We also provide lower bounds that justify the necessity of our modeling assumptions. Experiments on synthetic and real-world datasets show the benefit of methods that estimate the uplifts over policies that do not use this structure.
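
A simplified sketch of the uplift idea with known affected sets (the paper also covers unknown sets and baselines): since all variables are observed after every action, the variables an action does not touch can be estimated from all rounds pooled, so per-action uncertainty enters only through the few affected variables. The toy environment, constants, and confidence width below are assumptions.

```python
# Simplified UCB-on-uplifts sketch with *known* affected sets; the toy
# environment and confidence width are illustrative. Pooling unaffected
# variables across all rounds is what makes uplift estimation cheaper
# than estimating each action's full reward from scratch.
import numpy as np

rng = np.random.default_rng(0)
K, A, T = 10, 3, 3000                          # variables, actions, rounds
affected = [{0, 1}, {2}, {3, 4, 5}]            # assumed known affected sets
base = rng.uniform(0.2, 0.6, K)                # baseline Bernoulli means
lifts = [0.10, 0.02, 0.05]                     # uplift an action adds on its set

def means(a):
    return base + np.array([lifts[a] if i in affected[a] else 0.0
                            for i in range(K)])

s_a, n_a = np.zeros((A, K)), np.zeros(A)       # per-action stats (affected)
s_p, n_p = np.zeros(K), np.zeros(K)            # pooled stats (unaffected)
for t in range(T):
    idx = np.full(A, np.inf)                   # explore untried actions first
    for a in range(A):
        if n_a[a] > 0:
            est = sum(s_a[a, i] / n_a[a] if i in affected[a]
                      else s_p[i] / max(n_p[i], 1.0) for i in range(K))
            idx[a] = est + len(affected[a]) * np.sqrt(2 * np.log(t + 1) / n_a[a])
    a = int(np.argmax(idx))
    obs = rng.binomial(1, means(a)).astype(float)
    s_a[a] += obs; n_a[a] += 1
    for i in set(range(K)) - affected[a]:      # only pool untouched variables
        s_p[i] += obs[i]; n_p[i] += 1
print("pulls per action:", n_a)                # action 0 (largest uplift) wins
```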

On Codomain Separability and Label Inference from (Noisy) Loss Functions

Jul 07, 2021
Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset. Label inference was recently introduced as the problem of reconstructing the ground-truth labels of this private dataset from just the (possibly perturbed) loss function values evaluated at chosen prediction vectors, without any other access to the hidden dataset. Existing results have demonstrated that this inference is possible for specific loss functions, such as the cross-entropy loss. In this paper, we introduce the notion of codomain separability to formally study the necessary and sufficient conditions under which label inference is possible from any (noisy) loss function values. Using this notion, we show that for many commonly used loss functions, including multiclass cross-entropy with common activation functions and some Bregman-divergence-based losses, it is possible to design label inference attacks for arbitrary noise levels. We demonstrate that these attacks can also be carried out through actual neural network models, and we analyze, both formally and empirically, the role of finite-precision arithmetic in this setting.
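
A tiny brute-force illustration of the definition: a codomain-separable query is one whose loss values over all label vectors stay pairwise more than 2*tau apart, so a tau-bounded perturbation of the score still decodes to the right labels. The logit choice below (scaled powers of two, giving distinct subset sums) is an illustrative construction, not the paper's.

```python
# Brute-force demo of codomain separability for binary log-loss on n=10
# points: logits proportional to powers of two give distinct subset sums,
# so all 2^n label vectors map to loss values more than 2*tau apart and
# nearest-value decoding undoes any noise of magnitude at most tau. The
# query construction is illustrative, not the paper's.
import itertools
import numpy as np

n, tau = 10, 4e-5
logits = 0.001 * 2.0 ** np.arange(n)           # distinct subset sums by design
p = 1.0 / (1.0 + np.exp(-logits))              # the (assumed) query predictions

def log_loss(y):                               # mean binary cross-entropy
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

table = {log_loss(np.array(y)): y for y in itertools.product((0, 1), repeat=n)}
vals = sorted(table)
gap = min(b - a for a, b in zip(vals, vals[1:]))
print("codomain separable at this tau:", gap > 2 * tau)   # True

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
noisy = log_loss(y_true) + np.random.default_rng(0).uniform(-tau, tau)
decoded = table[min(vals, key=lambda v: abs(v - noisy))]  # nearest loss value
print("labels recovered:", np.array_equal(decoded, y_true))
```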

Collaborative Causal Discovery with Atomic Interventions

Jun 06, 2021
Raghavendra Addanki, Shiva Prasad Kasiviswanathan

We introduce a new Collaborative Causal Discovery problem, which models a common scenario with multiple independent entities, each with its own causal graph, where the goal is to simultaneously learn all of these causal graphs. We study this problem without the causal sufficiency assumption, using maximal ancestral graphs (MAGs) to model the causal graphs, and assuming the ability to actively perform independent single-vertex (or atomic) interventions on the entities. If the $M$ underlying (unknown) causal graphs of the entities satisfy a natural notion of clustering, we give algorithms that leverage this property to recover all the causal graphs using roughly $O(\log M)$ atomic interventions per entity. This is significantly fewer than the $n$ atomic interventions per entity required to learn each causal graph separately, where $n$ is the number of observable nodes in the causal graph. We complement our results with a lower bound and discuss various extensions of our collaborative setting.
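
The clustering step can be sketched independently of the MAG machinery: probe every entity with the same small set of atomic interventions, use the responses as a signature, and run the expensive full discovery once per signature class rather than once per entity. Everything below (the probe, the toy entities) is an assumed stand-in to show the structure, not the paper's algorithm.

```python
# Sketch of the clustering idea: shared probes give each entity a
# signature; entities with identical graphs get identical signatures,
# so full discovery runs once per cluster. The exact probing and the
# MAG machinery of the paper are abstracted behind `probe`.
import random
from collections import defaultdict

def cluster_entities(entities, probe, n_nodes, rounds):
    """probe(entity, node) -> hashable response to intervening on node."""
    sig = defaultdict(list)
    nodes = random.sample(range(n_nodes), rounds)   # ~O(log M) shared probes
    for e in entities:
        key = tuple(probe(e, v) for v in nodes)
        sig[key].append(e)
    return list(sig.values())

# Toy run: entities carry one of 3 hidden graph types; the probe response
# depends only on the type, so types separate with enough rounds.
random.seed(0)
entities = [(i, random.randrange(3)) for i in range(1000)]     # (id, type)
probe = lambda e, v: hash((e[1], v)) % 5
clusters = cluster_entities(entities, probe, n_nodes=50, rounds=7)
print([len(c) for c in clusters])      # three clusters recover the types
```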

Label Inference Attacks from Log-loss Scores

May 18, 2021
Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

The log-loss (also known as cross-entropy loss) metric is used ubiquitously across machine learning applications to assess the performance of classification algorithms. In this paper, we investigate the problem of inferring the labels of a dataset from single (or multiple) log-loss score(s), without any other access to the dataset. Surprisingly, we show that for any finite number of label classes, it is possible to accurately infer the labels of the dataset from the reported log-loss score of a single carefully constructed prediction vector, if we allow arbitrary-precision arithmetic. Additionally, we present label inference algorithms (attacks) that succeed even under the addition of noise to the log-loss scores and under limited-precision arithmetic. All our algorithms rely on ideas from number theory and combinatorics and require no model training. We run experimental simulations on real datasets to demonstrate the ease of mounting these attacks in practice.

* Accepted to ICML 2021 
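
One way to see how a single score can encode all labels is the prime-coding trick sketched below (a construction in the spirit of the abstract's number-theoretic ideas; the exact query used in the paper may differ). Querying $p_i = q_i/(1+q_i)$ for distinct primes $q_i$ makes $\exp(-n \cdot \text{loss}) \cdot \prod_i (1+q_i)$ equal the integer $\prod_i q_i^{y_i}$, whose unique factorization reveals every label. The demo keeps $n$ small so ordinary floats suffice.

```python
# Prime-coding sketch of single-query label inference from binary
# log-loss. With p_i = q_i / (1 + q_i) for distinct primes q_i,
# exp(-n * loss) * prod(1 + q_i) equals the integer prod(q_i ** y_i),
# so divisibility by q_i reveals y_i. Small n keeps the demo inside
# float precision; the paper also covers noisy and limited-precision
# settings.
import numpy as np

primes = np.array([2, 3, 5, 7, 11, 13, 17, 19])
n, p = len(primes), primes / (1.0 + primes)    # the single query vector

def reported_log_loss(y):                      # what the server reveals
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y_secret = np.array([1, 0, 1, 1, 0, 0, 1, 0])
L = reported_log_loss(y_secret)

N = round(np.exp(-n * L) * np.prod(1.0 + primes))   # = prod q_i ** y_i
y_hat = np.array([int(N % q == 0) for q in primes])
print("recovered:", np.array_equal(y_hat, y_secret))
```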

Efficient Intervention Design for Causal Discovery with Latents

May 24, 2020
Raghavendra Addanki, Shiva Prasad Kasiviswanathan, Andrew McGregor, Cameron Musco

We consider recovering a causal graph in the presence of latent variables, where we seek to minimize the cost of interventions used in the recovery process. We consider two intervention cost models: (1) a linear cost model, where the cost of an intervention on a subset of variables has a linear form, and (2) an identity cost model, where every intervention has the same cost regardless of which variables it is on, i.e., the goal is simply to minimize the number of interventions. Under the linear cost model, we give an algorithm that identifies the ancestral relations of the underlying causal graph at a cost within a factor of $2$ of the optimal intervention cost. This approximation factor can be improved to $1+\epsilon$ for any $\epsilon > 0$ under some mild restrictions. Under the identity cost model, we bound the number of interventions needed to recover the entire causal graph, including the latent variables, using a parameterization of the causal graph through a special type of collider. In particular, we introduce the notion of $p$-colliders, which are colliders between pairs of nodes arising from a specific type of conditioning in the causal graph, and we provide an upper bound on the number of interventions as a function of the maximum number of $p$-colliders between any two nodes in the causal graph.
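
For intuition on how few interventions can pin down pairwise relations, here is the classical separating-system construction, offered as background rather than the paper's algorithm: assign each node a distinct binary codeword and intervene on each bit-class and its complement, so every ordered pair of nodes is split by some intervention using only $2\lceil \log_2 n \rceil$ interventions instead of $n$.

```python
# Classical separating system: give each node a distinct binary codeword;
# for every bit position, intervene on the nodes with that bit set and on
# the complement. Any ordered pair (i, j) is then split by some set (it
# contains i but not j). Background construction, not the paper's method.
from math import ceil, log2

def separating_interventions(n):
    bits = max(1, ceil(log2(n)))
    sets = []
    for b in range(bits):
        ones = {v for v in range(n) if (v >> b) & 1}
        sets.append(ones)
        sets.append(set(range(n)) - ones)
    return sets

sets = separating_interventions(10)
assert all(any(i in s and j not in s for s in sets)
           for i in range(10) for j in range(10) if i != j)
print(len(sets), "interventions separate all ordered pairs of 10 nodes")
```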
