Maggie Makar

Partial identification of kernel based two sample tests with mismeasured data

Aug 07, 2023
Ron Nafshi, Maggie Makar

Nonparametric two-sample tests such as the Maximum Mean Discrepancy (MMD) are often used to detect differences between two distributions in machine learning applications. However, the majority of existing literature assumes that error-free samples from the two distributions of interest are available. We relax this assumption and study the estimation of the MMD under $\epsilon$-contamination, where a possibly non-random $\epsilon$ proportion of one distribution is erroneously grouped with the other. We show that under $\epsilon$-contamination, the typical estimate of the MMD is unreliable. Instead, we study partial identification of the MMD, and characterize sharp upper and lower bounds that contain the true, unknown MMD. We propose a method to estimate these bounds, and show that it gives estimates that converge to the sharpest possible bounds on the MMD as sample size increases, with a convergence rate that is faster than alternative approaches. Using three datasets, we empirically validate that our approach is superior to the alternatives: it gives tight bounds with a low false coverage rate.
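
For reference, a minimal sketch of the standard unbiased MMD^2 estimator that the abstract argues becomes unreliable under $\epsilon$-contamination. The RBF kernel and its bandwidth are illustrative assumptions; the paper's actual contribution, the partial-identification bounds, is not reproduced here:

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 * bandwidth^2))."""
    sq_dists = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
                - 2 * X @ Y.T)
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased U-statistic estimate of MMD^2 between samples X ~ P, Y ~ Q."""
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = (rbf_kernel(A, B, bandwidth)
                     for A, B in [(X, X), (Y, Y), (X, Y)])
    # Exclude diagonal (self-similarity) terms to remove bias.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

When an $\epsilon$ fraction of one sample actually comes from the other distribution, this plug-in estimate can be far from the true MMD, which is what motivates the bounds studied in the paper.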

Multi-Similarity Contrastive Learning

Jul 06, 2023
Emily Mu, John Guttag, Maggie Makar

Given a similarity metric, contrastive methods learn a representation in which examples that are similar are pushed together and examples that are dissimilar are pulled apart. Contrastive learning techniques have been utilized extensively to learn representations for tasks ranging from image classification to caption generation. However, existing contrastive learning approaches can fail to generalize because they do not take into account the possibility of different similarity relations. In this paper, we propose a novel multi-similarity contrastive loss (MSCon) that learns generalizable embeddings by jointly utilizing supervision from multiple metrics of similarity. Our method automatically learns contrastive similarity weightings based on the uncertainty in the corresponding similarity, down-weighting uncertain tasks and leading to better out-of-domain generalization to new tasks. We show empirically that networks trained with MSCon outperform state-of-the-art baselines on in-domain and out-of-domain settings.
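
A minimal sketch of one way to combine several contrastive objectives with uncertainty-based weights, in the spirit of MSCon. The InfoNCE form, the learned log-variance weighting, and all names here are assumptions for illustration; the paper's actual loss may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSimilarityContrastive(nn.Module):
    """Sketch: combine per-similarity contrastive losses with learned
    uncertainty weights (the log-variance parameterization is an assumption)."""

    def __init__(self, num_similarities, temperature=0.1):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_similarities))
        self.temperature = temperature

    def info_nce(self, z, pos_mask):
        """InfoNCE-style loss; `pos_mask[i, j]` marks pairs that are
        'similar' under one similarity metric (diagonal is ignored)."""
        z = F.normalize(z, dim=1)
        sim = z @ z.T / self.temperature
        eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(eye, -1e9)  # exclude self-pairs
        log_prob = F.log_softmax(sim, dim=1)
        pos = pos_mask.float().masked_fill(eye, 0.0)
        return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

    def forward(self, z, pos_masks):
        # Down-weight similarity metrics with high estimated uncertainty.
        losses = torch.stack([self.info_nce(z, m) for m in pos_masks])
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```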

Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare

May 02, 2023
Shengpu Tang, Maggie Makar, Michael W. Sjoding, Finale Doshi-Velez, Jenna Wiens

Many reinforcement learning (RL) applications have combinatorial action spaces, where each action is a composition of sub-actions. A standard RL approach ignores this inherent factorization structure, resulting in a potential failure to make meaningful inferences about rarely observed sub-action combinations; this is particularly problematic for offline settings, where data may be limited. In this work, we propose a form of linear Q-function decomposition induced by factored action spaces. We study the theoretical properties of our approach, identifying scenarios where it is guaranteed to lead to zero bias when used to approximate the Q-function. Outside the regimes with theoretical guarantees, we show that our approach can still be useful because it leads to better sample efficiency without necessarily sacrificing policy optimality, allowing us to achieve a better bias-variance trade-off. Across several offline RL problems using simulators and real-world datasets motivated by healthcare, we demonstrate that incorporating factored action spaces into value-based RL can result in better-performing policies. Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space when applying RL to observational datasets.
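
A minimal sketch of the linear Q-function decomposition over a factored action space described above, Q(s, a) = sum_i Q_i(s, a_i). The network architecture and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FactoredQNetwork(nn.Module):
    """Sketch of a linear Q-decomposition over a factored action space:
    Q(s, a) = sum_i Q_i(s, a_i), with one head per action dimension."""

    def __init__(self, state_dim, sub_action_sizes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # One head per sub-action, predicting Q_i(s, .) over values of a_i.
        self.heads = nn.ModuleList([nn.Linear(hidden, k) for k in sub_action_sizes])

    def forward(self, state):
        h = self.encoder(state)
        return [head(h) for head in self.heads]  # per-sub-action Q values

    def q_value(self, state, action):
        """Q(s, a) as the sum of per-dimension values, a = (a_1, ..., a_d)."""
        per_dim = self.forward(state)
        return sum(q.gather(1, a.unsqueeze(1)).squeeze(1)
                   for q, a in zip(per_dim, action.unbind(1)))
```

Because the decomposition is additive, the greedy joint action can be found by maximizing each head independently, avoiding enumeration of the combinatorial action space.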

* 30 pages, 18 figures, 2 tables. NeurIPS 2022. Code available at https://github.com/MLD3/OfflineRL_FactoredActions 

Fairness and robustness in anti-causal prediction

Sep 20, 2022
Maggie Makar, Alexander D'Amour

Robustness to distribution shift and fairness have independently emerged as two important desiderata required of modern machine learning models. While these two desiderata seem related, the connection between them is often unclear in practice. Here, we discuss these connections through a causal lens, focusing on anti-causal prediction tasks, where the input to a classifier (e.g., an image) is assumed to be generated as a function of the target label and the protected attribute. By taking this perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness - risk invariance. These connections provide new motivation for applying the separation criterion in anti-causal settings, and inform long-standing discussions regarding fairness-performance tradeoffs. In addition, our findings suggest that robustness-motivated approaches can be used to enforce separation, and that they often work better in practice than methods designed to directly enforce separation. Using a medical dataset, we empirically validate our findings on the task of detecting pneumonia from X-rays, in a setting where differences in prevalence across sex groups motivate a fairness mitigation. Our findings highlight the importance of considering causal structure when choosing and enforcing fairness criteria.
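
A minimal sketch of checking the separation criterion discussed above (predictions independent of the protected attribute given the true label): for a binary classifier, zero gaps in group-conditional true- and false-positive rates are exactly separation (equalized odds). Function and variable names are assumptions:

```python
import numpy as np

def separation_gaps(y_true, y_pred, group):
    """Report TPR and FPR gaps across protected groups; a perfectly
    'separated' classifier has zero gaps."""
    gaps = {}
    for y in (0, 1):
        rates = [y_pred[(y_true == y) & (group == g)].mean()
                 for g in np.unique(group)]
        gaps['TPR gap' if y == 1 else 'FPR gap'] = max(rates) - min(rates)
    return gaps
```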

Causally-motivated Shortcut Removal Using Auxiliary Labels

Jun 03, 2021
Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, Alexander D'Amour

Robustness to certain forms of distribution shift is a key concern in many ML applications. Often, robustness can be formulated as enforcing invariances to particular interventions on the data generating process. Here, we study a flexible, causally-motivated approach to enforcing such invariances, paying special attention to shortcut learning, where a robust predictor can achieve optimal i.i.d. generalization in principle, but instead relies on spurious correlations or shortcuts in practice. Our approach uses auxiliary labels, typically available at training time, to enforce conditional independences between the latent factors that determine these labels. We show both theoretically and empirically that causally-motivated regularization schemes (a) lead to more robust estimators that generalize well under distribution shift, and (b) have better finite sample efficiency compared to usual regularization schemes, even in the absence of distribution shifts. Our analysis highlights important theoretical properties of training techniques commonly used in causal inference, fairness, and disentanglement literature.
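
A minimal sketch of one way to implement the conditional-independence regularization described above: penalizing a distributional distance (here an MMD with an assumed RBF bandwidth) between representations across auxiliary groups within each label stratum. This is a plausible instantiation under stated assumptions, not necessarily the paper's exact estimator:

```python
import torch
import torch.nn.functional as F

def mmd2(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate with an RBF kernel (bandwidth is an assumption)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def shortcut_penalized_loss(logits, phi, y, v, lam=1.0):
    """Cross-entropy plus a penalty pushing the representation phi(X) toward
    independence of the auxiliary label V within each stratum of Y."""
    loss = F.cross_entropy(logits, y)
    for label in y.unique():
        idx = y == label
        g0, g1 = phi[idx & (v == 0)], phi[idx & (v == 1)]
        if len(g0) > 1 and len(g1) > 1:
            loss = loss + lam * mmd2(g0, g1)
    return loss
```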

Estimation of Utility-Maximizing Bounds on Potential Outcomes

Oct 10, 2019
Maggie Makar, Fredrik D. Johansson, John Guttag, David Sontag

Estimation of individual treatment effects is often used as the basis for contextual decision making in fields such as healthcare, education, and economics. However, in many real-world applications it is sufficient for the decision maker to have upper and lower bounds on the potential outcomes of decision alternatives, allowing them to evaluate the trade-off between benefit and risk. With this in mind, we develop an algorithm for directly learning upper and lower bounds on the potential outcomes under treatment and non-treatment. Our theoretical analysis highlights a trade-off between the complexity of the learning task and the confidence with which the resulting bounds cover the true potential outcomes; the more confident we wish to be, the more complex the learning task is. We suggest a novel algorithm that maximizes a utility function while maintaining valid potential outcome bounds. We illustrate different properties of our algorithm, and highlight how it can be used to guide decision making using two semi-simulated datasets.
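
The abstract does not specify the utility function, so as a stand-in, here is a minimal sketch using the Winkler interval score, a standard objective that trades interval width against a penalty for failing to cover the observed outcome:

```python
import torch

def interval_score_loss(lower, upper, y, alpha=0.1):
    """Winkler interval score for learned bound functions: penalizes
    interval width plus (scaled) miscoverage. A stand-in for the paper's
    utility objective, which the abstract does not fully specify."""
    width = upper - lower
    below = torch.clamp(lower - y, min=0)  # y fell below the lower bound
    above = torch.clamp(y - upper, min=0)  # y fell above the upper bound
    return (width + (2 / alpha) * (below + above)).mean()
```

Here `lower` and `upper` would be the outputs of two networks (or two heads) predicting potential-outcome bounds; smaller `alpha` demands higher coverage at the cost of wider, more conservative intervals, mirroring the complexity-confidence trade-off described above.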

Short-term Mortality Prediction for Elderly Patients Using Medicare Claims Data

Dec 02, 2017
Maggie Makar, Marzyeh Ghassemi, David Cutler, Ziad Obermeyer

Risk prediction is central to both clinical medicine and public health. While many machine learning models have been developed to predict mortality, they are rarely applied in the clinical literature, where classification tasks typically rely on logistic regression. One reason for this is that existing machine learning models often seek to optimize predictions by incorporating features that are not present in the databases readily available to providers and policy makers, limiting generalizability and implementation. Here we tested a number of machine learning classifiers for prediction of six-month mortality in a population of elderly Medicare beneficiaries, using an administrative claims database of the kind available to the majority of health care payers and providers. We show that machine learning classifiers substantially outperform current widely used methods of risk prediction, but only when used with an improved feature set, developed for this study, that incorporates insights from clinical medicine. Our work has applications to supporting patient and provider decision making at the end of life, as well as population health-oriented efforts to identify patients at high risk of poor outcomes.
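
A minimal sketch of the kind of comparison described above, benchmarking a machine learning classifier against the logistic-regression baseline by AUC. The Medicare claims data are not public, so synthetic data stand in, and the models shown are illustrative rather than the paper's exact pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for claims-derived features and a rare
# six-month mortality outcome (~5% positive rate).
X, y = make_classification(n_samples=5000, n_features=40,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    probs = model.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_test, probs):.3f}")
```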
