Devavrat Shah

On Computationally Efficient Learning of Exponential Family Distributions

Sep 12, 2023
Abhin Shah, Devavrat Shah, Gregory W. Wornell

We consider the classical problem of learning, with arbitrary accuracy, the natural parameters of a $k$-parameter truncated \textit{minimal} exponential family from i.i.d. samples in a computationally and statistically efficient manner. We focus on the setting where the support as well as the natural parameters are appropriately bounded. While the traditional maximum likelihood estimator for this class of exponential family is consistent, asymptotically normal, and asymptotically efficient, evaluating it is computationally hard. In this work, we propose a novel loss function and a computationally efficient estimator that is consistent as well as asymptotically normal under mild conditions. We show that, at the population level, our method can be viewed as the maximum likelihood estimation of a re-parameterized distribution belonging to the same class of exponential family. Further, we show that our estimator can be interpreted as a solution to minimizing a particular Bregman score as well as an instance of minimizing the \textit{surrogate} likelihood. We also provide finite sample guarantees to achieve an error (in $\ell_2$-norm) of $\alpha$ in the parameter estimation with sample complexity $O({\sf poly}(k)/\alpha^2)$. Our method achieves the order-optimal sample complexity of $O({\sf log}(k)/\alpha^2)$ when tailored for node-wise-sparse Markov random fields. Finally, we demonstrate the performance of our estimator via numerical experiments.

* An earlier version of this work (arXiv:2110.15397), titled "A Computationally Efficient Method for Learning Exponential Family Distributions", was presented at the Neural Information Processing Systems Conference in December 2021
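
The abstract does not spell out the proposed loss function, so the sketch below only fixes the setting: a one-parameter truncated exponential family on a bounded support, together with the classical MLE baseline that the proposed estimator is designed to replace (the computational hardness the paper addresses arises when $k$ is large and the log-partition function can no longer be evaluated cheaply). The support, sufficient statistic, and step size here are illustrative assumptions, not the paper's choices.

```python
# Minimal sketch (not the paper's estimator): the truncated exponential family
# setup and the classical MLE baseline it improves upon. With k = 1 and support
# [0, 1], the log-partition function is a cheap one-dimensional integral.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 2001)      # bounded (truncated) support
phi = lambda x: x - 0.5                 # bounded sufficient statistic

def sample(theta, n):
    # inverse-CDF sampling of p_theta(x) proportional to exp(theta * phi(x))
    w = np.exp(theta * phi(grid))
    cdf = np.cumsum(w) / w.sum()
    return grid[np.searchsorted(cdf, rng.uniform(size=n))]

def mle(x, iters=500, lr=1.0):
    # gradient descent on the convex negative log-likelihood:
    # grad = E_theta[phi(X)] - (1/n) sum_i phi(x_i)
    theta = 0.0
    for _ in range(iters):
        w = np.exp(theta * phi(grid))
        model_mean = np.sum(w * phi(grid)) / np.sum(w)
        theta -= lr * (model_mean - phi(x).mean())
    return theta

x = sample(theta=2.0, n=5000)
print("MLE estimate of theta:", mle(x))  # close to the true value 2.0
```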

Exploiting Observation Bias to Improve Matrix Completion

Jun 07, 2023
Sean Mann, Charlotte Park, Devavrat Shah

We consider a variant of matrix completion where entries are revealed in a biased manner, adopting a model akin to that introduced by Ma and Chen. Instead of treating this observation bias as a disadvantage, as is typically the case, our goal is to exploit the shared information between the bias and the outcome of interest to improve predictions. Towards this, we propose a simple two-stage algorithm: (i) interpreting the observation pattern as a fully observed noisy matrix, we apply traditional matrix completion methods to the observation pattern to estimate the distances between the latent factors; (ii) we apply supervised learning on the recovered features to impute missing observations. We establish finite-sample error rates that are competitive with the corresponding supervised learning parametric rates, suggesting that our learning performance is comparable to having access to the unobserved covariates. Empirical evaluation using a real-world dataset reflects similar performance gains, with our algorithm's estimates having 30x smaller mean squared error compared to traditional matrix completion methods.
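
A rough sketch of the two-stage recipe described above, with illustrative stand-ins for pieces the abstract leaves unspecified: a truncated SVD of the observation pattern as the matrix-completion step, and k-nearest-neighbor regression in the recovered feature space as the supervised learner. The paper's precise choices may differ.

```python
# Two-stage imputation sketch: (i) spectral step on the observation pattern,
# (ii) supervised learning on the recovered latent features.
import numpy as np

def two_stage_impute(Y, mask, rank=5, k_neighbors=10):
    """Y: outcome matrix (entries outside `mask` are arbitrary);
    mask: binary observation pattern, treated as a fully observed noisy matrix."""
    # Stage (i): truncated SVD of the observation pattern itself.
    U, s, Vt = np.linalg.svd(mask.astype(float), full_matrices=False)
    row_feat = U[:, :rank] * s[:rank]          # latent row features
    col_feat = Vt[:rank, :].T                  # latent column features

    # Stage (ii): k-NN regression in feature space over the observed entries.
    obs_i, obs_j = np.nonzero(mask)
    obs_x = np.hstack([row_feat[obs_i], col_feat[obs_j]])
    obs_y = Y[obs_i, obs_j]

    Y_hat = Y.astype(float).copy()
    for i, j in zip(*np.nonzero(~mask.astype(bool))):
        x = np.hstack([row_feat[i], col_feat[j]])
        d = np.linalg.norm(obs_x - x, axis=1)
        nn = np.argsort(d)[:k_neighbors]
        Y_hat[i, j] = obs_y[nn].mean()         # impute with neighbors' average
    return Y_hat
```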

Auditing for Human Expertise

Jun 02, 2023
Rohan Alur, Loren Laine, Darrick K. Li, Manish Raghavan, Devavrat Shah, Dennis Shung

High-stakes prediction tasks (e.g., patient diagnosis) are often handled by trained human experts. A common source of concern about automation in these settings is that experts may exercise intuition that is difficult to model and/or have access to information (e.g., conversations with a patient) that is simply unavailable to a would-be algorithm. This raises a natural question: do human experts add value that could not be captured by an algorithmic predictor? We develop a statistical framework under which we can pose this question as a natural hypothesis test. Indeed, as our framework highlights, detecting human expertise is more subtle than simply comparing the accuracy of expert predictions to those made by a particular learning algorithm. Instead, we propose a simple procedure which tests whether expert predictions are statistically independent of the outcomes of interest after conditioning on the available inputs (`features'). A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data, and has direct implications for whether human-AI `complementarity' is achievable in a given prediction task. We highlight the utility of our procedure using admissions data collected from the emergency department of a large academic hospital system, where we show that physicians' admit/discharge decisions for patients with acute gastrointestinal bleeding (AGIB) appear to be incorporating information not captured in a standard algorithmic screening tool. This is despite the fact that the screening tool is arguably more accurate than physicians' discretionary decisions, highlighting that -- even absent normative concerns about accountability or interpretability -- accuracy is insufficient to justify algorithmic automation.

* 27 pages, 8 figures 
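
One generic way to instantiate the conditional-independence check described above (an illustrative sketch, not necessarily the paper's exact test): group observations that share matching features and ask, via a within-group permutation test, whether the experts' agreement with realized outcomes exceeds what chance would produce given the features alone. The grouping rule and agreement statistic are assumptions.

```python
# Within-group permutation test for expert value-add beyond the features.
import numpy as np

def expertise_permutation_test(groups, expert_pred, outcome, n_perm=2000, seed=0):
    """groups: labels collecting observations with (approximately) identical
    features; expert_pred, outcome: binary arrays of the same length."""
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    expert_pred = np.asarray(expert_pred)
    outcome = np.asarray(outcome)
    observed = np.mean(expert_pred == outcome)      # test statistic: agreement rate

    null_stats = np.empty(n_perm)
    for b in range(n_perm):
        permuted = expert_pred.copy()
        for g in np.unique(groups):
            idx = np.where(groups == g)[0]
            permuted[idx] = permuted[rng.permutation(idx)]  # shuffle within group
        null_stats[b] = np.mean(permuted == outcome)

    # One-sided p-value: is agreement unusually high given the features?
    return (1 + np.sum(null_stats >= observed)) / (n_perm + 1)
```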

SAMoSSA: Multivariate Singular Spectrum Analysis with Stochastic Autoregressive Noise

May 25, 2023
Abdullah Alomar, Munther Dahleh, Sean Mann, Devavrat Shah

The well-established practice of time series analysis involves estimating deterministic, non-stationary trend and seasonality components followed by learning the residual stochastic, stationary components. Recently, it has been shown that one can learn the deterministic non-stationary components accurately using multivariate Singular Spectrum Analysis (mSSA) in the absence of a correlated stationary component; meanwhile, in the absence of deterministic non-stationary components, the Autoregressive (AR) stationary component can also be learned readily, e.g. via Ordinary Least Squares (OLS). However, a theoretical underpinning of multi-stage learning algorithms involving both deterministic and stationary components has been absent in the literature despite their pervasiveness. We resolve this open question by establishing desirable theoretical guarantees for a natural two-stage algorithm, where mSSA is first applied to estimate the non-stationary components despite the presence of a correlated stationary AR component, which is subsequently learned from the residual time series. We provide a finite-sample forecasting consistency bound for the proposed algorithm, SAMoSSA, which is data-driven and thus requires minimal parameter tuning. To establish theoretical guarantees, we overcome three hurdles: (i) we characterize the spectra of Page matrices of stable AR processes, thus extending the analysis of mSSA; (ii) we extend the analysis of AR process identification in the presence of arbitrary bounded perturbations; (iii) we characterize the out-of-sample or forecasting error, as opposed to solely considering model identification. Through representative empirical studies, we validate the superior performance of SAMoSSA compared to existing baselines. Notably, SAMoSSA's ability to account for AR noise structure yields improvements ranging from 5% to 37% across various benchmark datasets.
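
A simplified, single-time-series sketch of the two-stage idea: form a Page matrix, hard-threshold its singular values to estimate the deterministic component, then fit an AR model to the residual by ordinary least squares. The rank, AR order, window length, and toy data below are assumptions; the actual SAMoSSA algorithm is multivariate and data-driven.

```python
# Stage 1: Page matrix + hard singular value thresholding (mSSA-style).
# Stage 2: OLS fit of an AR(p) model on the residual series.
import numpy as np

def page_matrix(x, L):
    """Stack a length-T series into an L x (T // L) matrix of consecutive blocks."""
    T = (len(x) // L) * L
    return x[:T].reshape(-1, L).T

def samossa_like(x, L=25, rank=4, p=1):
    P = page_matrix(x, L)
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    trend = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # keep top `rank` components
    trend = trend.T.reshape(-1)                        # back to a 1-D series

    r = x[:len(trend)] - trend                         # residual series
    X = np.column_stack([r[p - 1 - j:len(r) - 1 - j] for j in range(p)])
    y = r[p:]
    ar_coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return trend, ar_coef

# Toy example: linear trend + seasonality + AR(1) noise.
T = 1000
t = np.arange(T)
rng = np.random.default_rng(1)
eps = rng.standard_normal(T)
noise = np.zeros(T)
for i in range(1, T):
    noise[i] = 0.6 * noise[i - 1] + eps[i]
x = 0.01 * t + 3.0 * np.sin(2 * np.pi * t / 50) + noise
trend_hat, ar_hat = samossa_like(x)
print("estimated AR(1) coefficient:", ar_hat)          # roughly 0.6
```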

A User-Driven Framework for Regulating and Auditing Social Media

Apr 20, 2023
Sarah H. Cen, Aleksander Madry, Devavrat Shah

People form judgments and make decisions based on the information that they observe. A growing portion of that information is not only provided, but carefully curated by social media platforms. Although lawmakers largely agree that platforms should not operate without any oversight, there is little consensus on how to regulate social media. There is consensus, however, that creating a strict, global standard of "acceptable" content is untenable (e.g., in the US, it is incompatible with Section 230 of the Communications Decency Act and the First Amendment). In this work, we propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline. We provide a concrete framework for regulating and auditing a social media platform according to such a baseline. In particular, we introduce the notion of a baseline feed: the content that a user would see without filtering (e.g., on Twitter, this could be the chronological timeline). We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds, and we design a principled way to measure similarity. This approach is motivated by related suggestions that regulations should increase user agency. We present an auditing procedure that checks whether a platform honors this requirement. Notably, the audit needs only black-box access to a platform's filtering algorithm, and it does not access or infer private user information. We provide theoretical guarantees on the strength of the audit. We further show that requiring closeness between filtered and baseline feeds does not impose a large performance cost, nor does it create echo chambers.

* 21 pages, 4 figures 

Counterfactual Identifiability of Bijective Causal Models

Feb 04, 2023
Arash Nasr-Esfahany, Mohammad Alizadeh, Devavrat Shah

We study counterfactual identifiability in causal models with bijective generation mechanisms (BGM), a class that generalizes several widely-used causal models in the literature. We establish their counterfactual identifiability for three common causal structures with unobserved confounding, and propose a practical learning method that casts learning a BGM as structured generative modeling. Learned BGMs enable efficient counterfactual estimation and can be obtained using a variety of deep conditional generative models. We evaluate our techniques in a visual task and demonstrate their application in a real-world video streaming simulation task.
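
A toy illustration of why bijectivity makes counterfactuals computable: when the outcome mechanism is strictly monotone in the exogenous noise, the noise can be recovered from the factual observation and replayed under a different treatment. The hard-coded mechanism below is purely illustrative; in the paper the mechanism is learned with conditional generative models.

```python
# Abduction-action-prediction with a bijective generation mechanism.
import numpy as np

rng = np.random.default_rng(0)

def mechanism(treatment, noise):
    # Strictly increasing in `noise` for every treatment value => bijective.
    return treatment * 2.0 + np.exp(noise)

def invert_mechanism(treatment, outcome):
    # Abduction: recover the exogenous noise from a factual (treatment, outcome).
    return np.log(outcome - treatment * 2.0)

# Factual world: every unit receives treatment T = 1.
u = rng.standard_normal(1000)
y_factual = mechanism(1.0, u)

# Counterfactual query: what would Y have been for the same units under T = 0?
u_recovered = invert_mechanism(1.0, y_factual)       # abduction
y_counterfactual = mechanism(0.0, u_recovered)       # action + prediction

print(np.allclose(u_recovered, u))                   # True: noise is identified
print(np.allclose(y_counterfactual, np.exp(u)))      # exact counterfactuals
```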

Matrix Estimation for Individual Fairness

Feb 04, 2023
Cindy Y. Zhang, Sarah H. Cen, Devavrat Shah

In recent years, multiple notions of algorithmic fairness have arisen. One such notion is individual fairness (IF), which requires that individuals who are similar receive similar treatment. In parallel, matrix estimation (ME) has emerged as a natural paradigm for handling noisy data with missing values. In this work, we connect the two concepts. We show that pre-processing data using ME can improve an algorithm's IF without sacrificing performance. Specifically, we show that using a popular ME method known as singular value thresholding (SVT) to pre-process the data provides a strong IF guarantee under appropriate conditions. We then show that, under analogous conditions, SVT pre-processing also yields estimates that are consistent and approximately minimax optimal. As such, the ME pre-processing step does not, under the stated conditions, increase the prediction error of the base algorithm, i.e., does not impose a fairness-performance trade-off. We verify these results on synthetic and real data.

* 22 pages, 3 figures 
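
A minimal sketch of SVT as a pre-processing step in the spirit described above: zero-fill and rescale the partially observed matrix, then keep only the singular values above a threshold. The threshold heuristic below is an assumption; the paper states its own conditions under which this step yields the individual fairness and accuracy guarantees.

```python
# Singular value thresholding (SVT) pre-processing sketch.
import numpy as np

def svt_preprocess(Y, mask, threshold=None):
    """Zero-fill unobserved entries, rescale by the observation fraction,
    then keep only singular values above `threshold`."""
    p_hat = mask.mean()                               # estimated observation probability
    Z = np.where(mask, Y, 0.0) / max(p_hat, 1e-12)    # rescaled zero-filled matrix
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    if threshold is None:
        # heuristic cutoff proportional to sqrt of the larger dimension (assumption)
        threshold = 2.0 * np.sqrt(max(Z.shape)) * np.std(Z[mask.astype(bool)])
    keep = s > threshold
    return (U[:, keep] * s[keep]) @ Vt[keep, :]

# The denoised matrix can then be handed to any downstream prediction algorithm.
```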

Doubly robust nearest neighbors in factor models

Nov 28, 2022
Raaz Dwivedi, Katherine Tian, Sabina Tomkins, Predrag Klasnja, Susan Murphy, Devavrat Shah

In this technical note, we introduce an improved variant of nearest neighbors for counterfactual inference in panel data settings where multiple units are assigned multiple treatments over multiple time points, each sampled with constant probabilities. We call this estimator a doubly robust nearest neighbor estimator and provide a high-probability, non-asymptotic error bound for the mean parameter corresponding to each unit at each time. Our guarantee shows that the doubly robust estimator provides a (near-)quadratic improvement in the error compared to nearest neighbor estimators analyzed in prior work for these settings.
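
An illustrative sketch of a doubly robust combination of row and column nearest neighbors for a panel Y (units by time): average Y[i, s] + Y[j, t] - Y[j, s] over row neighbors j of unit i and column neighbors s of time t, which cancels the first-order error of either one-sided average. The neighbor-selection rule below is a simplification and may differ from the paper's exact procedure.

```python
# Doubly robust nearest-neighbor estimate of the mean parameter at (i, t).
import numpy as np

def dr_nn_estimate(Y, mask, i, t, k=5):
    mask = mask.astype(bool)
    n, T = Y.shape

    # Row neighbors: units whose commonly observed outcomes are closest to unit i's.
    def row_dist(j):
        common = mask[i] & mask[j]
        return np.mean((Y[i, common] - Y[j, common]) ** 2) if common.any() else np.inf

    # Column neighbors: times whose commonly observed outcomes are closest to time t's.
    def col_dist(s):
        common = mask[:, t] & mask[:, s]
        return np.mean((Y[common, t] - Y[common, s]) ** 2) if common.any() else np.inf

    rows = sorted((j for j in range(n) if j != i), key=row_dist)[:k]
    cols = sorted((s for s in range(T) if s != t), key=col_dist)[:k]

    # Doubly robust combination over neighbor pairs (j, s): Y[i, s] + Y[j, t] - Y[j, s]
    vals = [Y[i, s] + Y[j, t] - Y[j, s]
            for j in rows for s in cols
            if mask[i, s] and mask[j, t] and mask[j, s]]
    return np.mean(vals) if vals else np.nan
```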

On counterfactual inference with unobserved confounding

Nov 14, 2022
Abhin Shah, Raaz Dwivedi, Devavrat Shah, Gregory W. Wornell

Given an observational study with $n$ independent but heterogeneous units and one $p$-dimensional sample per unit containing covariates, interventions, and outcomes, our goal is to learn the counterfactual distribution for each unit. We consider studies with unobserved confounding, which introduces statistical biases between interventions and outcomes and exacerbates the heterogeneity across units. Modeling the underlying joint distribution as an exponential family and under suitable conditions, we reduce learning the $n$ unit-level counterfactual distributions to learning $n$ exponential family distributions with heterogeneous parameters and only one sample per distribution. We introduce a convex objective that pools all $n$ samples to jointly learn all $n$ parameters and provide a unit-wise mean squared error bound that scales linearly with the metric entropy of the parameter space. For example, when the parameters are $s$-sparse linear combinations of $k$ known vectors, the error is $O(s\log k/p)$. En route, we derive sufficient conditions for compactly supported distributions to satisfy the logarithmic Sobolev inequality.

Network Synthetic Interventions: A Framework for Panel Data with Network Interference

Oct 20, 2022
Anish Agarwal, Sarah Cen, Devavrat Shah, Christina Lee Yu

We propose a generalization of the synthetic controls and synthetic interventions methodology to incorporate network interference. We consider the estimation of unit-specific treatment effects from panel data in the presence of spillover effects across units and unobserved confounding. Key to our approach is a novel latent factor model that takes into account network interference and generalizes the factor models typically used in panel data settings. We propose an estimator, "network synthetic interventions", and show that it consistently estimates the mean outcomes for a unit under an arbitrary sequence of treatments for itself and its neighborhood, provided certain observation patterns hold in the data. We corroborate our theoretical findings with simulations.
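
For orientation, a sketch of the standard synthetic-interventions building block that this work generalizes: principal component regression of the target unit's pre-intervention outcomes on donor units that later received the treatment of interest, with the learned weights applied to those donors' post-intervention outcomes. The network-interference extension (conditioning on neighborhood treatment sequences under the paper's latent factor model) is not shown, and the rank cutoff is an assumption.

```python
# Vanilla synthetic interventions via principal component regression (PCR).
import numpy as np

def synthetic_intervention(pre, post_d, target_pre, rank=None):
    """pre:        donors x pre-periods outcomes (all under control);
    post_d:     donors x post-periods outcomes for donors that received treatment d;
    target_pre: target unit's pre-period outcomes.
    Returns the target's predicted post-period outcomes had it received d."""
    # Denoise the donor pre-period matrix, then learn linear weights that
    # express the target unit as a combination of donor units.
    U, s, Vt = np.linalg.svd(pre, full_matrices=False)
    if rank is None:
        rank = int(np.sum(s > 0.1 * s[0]))            # crude spectral cutoff
    pre_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    w, *_ = np.linalg.lstsq(pre_lowrank.T, target_pre, rcond=None)
    return post_d.T @ w                               # counterfactual post-period outcomes
```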
