
Li Kheng Chai


To Predict or to Reject: Causal Effect Estimation with Uncertainty on Networked Data

Sep 15, 2023
Hechuan Wen, Tong Chen, Li Kheng Chai, Shazia Sadiq, Kai Zheng, Hongzhi Yin

Figures 1–4 for To Predict or to Reject: Causal Effect Estimation with Uncertainty on Networked Data

Due to the imbalanced nature of networked observational data, causal effect predictions for some individuals can severely violate the positivity/overlap assumption, yielding unreliable estimates. Nevertheless, this risk in individual-level treatment effect estimation on networked data has been largely under-explored. To build a more trustworthy causal effect estimator, we propose the uncertainty-aware graph deep kernel learning (GraphDKL) framework with a Lipschitz constraint, which models prediction uncertainty with a Gaussian process and identifies unreliable estimates. To the best of our knowledge, GraphDKL is the first framework to tackle violations of the positivity assumption when performing causal effect estimation on graphs. With extensive experiments, we demonstrate the superiority of the proposed method for uncertainty-aware causal effect estimation on networked data.
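The predict-or-reject mechanism described in the abstract can be illustrated with a minimal Gaussian-process sketch: query points far from the training support (i.e., with poor overlap) receive high predictive uncertainty, and their estimates are rejected rather than reported. This toy is not the paper's GraphDKL (no graph structure, no deep kernel, no Lipschitz constraint); the RBF kernel, noise level, and rejection threshold below are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

rng = np.random.default_rng(0)
# Training covariates concentrated near 0 (dense support), noisy outcomes
X = rng.normal(0.0, 1.0, size=(100, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=100)

# Standard GP posterior via Cholesky factorization
noise = 1e-2
K = rbf(X, X) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# One in-support query (0.0) and one far outside the training support (6.0)
Xs = np.array([[0.0], [6.0]])
Ks = rbf(X, Xs)
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
var = np.clip(np.diag(rbf(Xs, Xs)) - (v * v).sum(0), 0.0, None)
std = np.sqrt(var)

# Predict when the posterior std is low; reject when it is high
threshold = 0.5  # assumed rejection threshold on predictive std
decisions = ["predict" if s < threshold else "reject" for s in std]
```

For the in-support query at 0.0, the posterior standard deviation is tiny and the estimate is kept; at 6.0, far from any training point, it approaches the prior scale and the estimate is rejected.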

* Accepted by ICDM'23 

Variational Counterfactual Prediction under Runtime Domain Corruption

Jun 23, 2023
Hechuan Wen, Tong Chen, Li Kheng Chai, Shazia Sadiq, Junbin Gao, Hongzhi Yin

Figures 1–4 for Variational Counterfactual Prediction under Runtime Domain Corruption

To date, various neural methods have been proposed for causal effect estimation from observational data, under the default assumption that the same variables are identically distributed and available at both the training and inference (i.e., runtime) stages. However, distribution shift (i.e., domain shift) can occur at runtime, and greater challenges arise when the accessibility of variables is impaired. This is commonly caused by growing privacy and ethical concerns, which can render arbitrary variables unavailable across the entire runtime data, making imputation impractical. We term the co-occurrence of domain shift and inaccessible variables runtime domain corruption, which seriously impairs the generalizability of a trained counterfactual predictor. To counter runtime domain corruption, we subsume counterfactual prediction under the notion of domain adaptation. Specifically, we upper-bound the error w.r.t. the target domain (i.e., runtime covariates) by the sum of the source-domain error and the inter-domain distribution distance. In addition, we build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme that first reduces the latent distribution disparity between treated and control groups, and then between training and runtime variables. We demonstrate that VEGAN outperforms state-of-the-art baselines on individual-level treatment effect estimation under runtime domain corruption on benchmark datasets.
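The error bound in the abstract (target-domain error bounded by source-domain error plus inter-domain distribution distance) suggests monitoring a distance between training-time and runtime latent distributions, and driving it down via adaptation. Below is a minimal numpy sketch using squared maximum mean discrepancy (MMD) as one such distance; VEGAN itself uses an adversarial scheme rather than MMD, and the Gaussian latents here are synthetic stand-ins for learned representations.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel between samples X and Y."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(200, 2))   # training-time latents
shifted = rng.normal(1.5, 1.0, size=(200, 2))  # runtime latents under domain shift
aligned = rng.normal(0.0, 1.0, size=(200, 2))  # runtime latents after adaptation

gap_before = rbf_mmd2(source, shifted)  # large: distributions disagree
gap_after = rbf_mmd2(source, aligned)   # near zero: distributions match
```

A small post-adaptation MMD tightens the bound's distance term, which is the sense in which reducing latent disparity improves runtime generalization.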
