Wenbo Gong

BayesDAG: Gradient-Based Posterior Sampling for Causal Discovery

Jul 26, 2023
Yashas Annadani, Nick Pawlowski, Joel Jennings, Stefan Bauer, Cheng Zhang, Wenbo Gong

Bayesian causal discovery aims to infer the posterior distribution over causal models from observed data, quantifying epistemic uncertainty and benefiting downstream tasks. However, computational challenges arise due to joint inference over the combinatorial space of Directed Acyclic Graphs (DAGs) and nonlinear functions. Despite recent progress towards efficient posterior inference over DAGs, existing methods are either limited to variational inference on node permutation matrices for linear causal models, which compromises inference accuracy, or rely on continuous relaxations of adjacency matrices constrained by a DAG regularizer, which cannot guarantee that the resulting graphs are DAGs. In this work, we introduce a scalable Bayesian causal discovery framework based on stochastic gradient Markov Chain Monte Carlo (SG-MCMC) that overcomes these limitations. Our approach directly samples DAGs from the posterior without requiring any DAG regularization, simultaneously draws function parameter samples, and is applicable to both linear and nonlinear causal models. To enable this, we derive a novel equivalence to permutation-based DAG learning, which opens up the possibility of using any relaxed gradient estimator defined over permutations. To our knowledge, this is the first framework to apply gradient-based MCMC sampling to causal discovery. Empirical evaluations on synthetic and real-world datasets demonstrate our approach's effectiveness compared to state-of-the-art baselines.
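
As a rough, hypothetical illustration of gradient-based posterior sampling over DAGs (not the paper's actual algorithm), the sketch below uses a generic SGLD update to move a vector of node potentials and the function parameters jointly, and builds the graph by only allowing edges that respect the ordering induced by the potentials, so every sample is a DAG by construction; in practice the gradient with respect to the potentials would come from a relaxed estimator over permutations, as the abstract describes.

    # Hypothetical sketch: generic SGLD updates over node potentials and function
    # parameters, with the DAG obtained by masking edges against the induced
    # ordering. Names and shapes are illustrative, not BayesDAG's implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 5                                    # number of variables
    node_potential = rng.normal(size=d)      # induces a node ordering
    theta = 0.1 * rng.normal(size=(d, d))    # toy per-edge parameters

    def dag_mask(potential):
        """Allow an edge i -> j only if i precedes j in the induced ordering."""
        rank = np.empty(len(potential), dtype=int)
        rank[np.argsort(potential)] = np.arange(len(potential))
        return (rank[:, None] < rank[None, :]).astype(float)   # acyclic by construction

    def sgld_step(x, grad, step=1e-3):
        """One stochastic gradient Langevin dynamics update."""
        return x + 0.5 * step * grad + np.sqrt(step) * rng.normal(size=x.shape)

    # One joint iteration, given (relaxed) minibatch gradients g_pot, g_theta of
    # log p(data, G(node_potential), theta):
    #   node_potential = sgld_step(node_potential, g_pot)
    #   theta          = sgld_step(theta, g_theta)
    #   adjacency      = dag_mask(node_potential)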

Understanding Causality with Large Language Models: Feasibility and Opportunities

Apr 11, 2023
Cheng Zhang, Stefan Bauer, Paul Bennett, Jiangfeng Gao, Wenbo Gong, Agrin Hilmkil, Joel Jennings, Chao Ma, Tom Minka, Nick Pawlowski, James Vaughan

We assess the ability of large language models (LLMs) to answer causal questions by analyzing their strengths and weaknesses across three types of causal questions. We believe that current LLMs can answer causal questions that draw on existing causal knowledge, acting much like combined domain experts. However, they are not yet able to provide satisfactory answers for discovering new knowledge or for high-stakes decision-making tasks that demand high precision. We discuss possible future directions and opportunities, such as enabling explicit and implicit causal modules as well as deep causal-aware LLMs. These would not only enable LLMs to answer many different types of causal questions for greater impact but also make LLMs more trustworthy and efficient in general.

Rhino: Deep Causal Temporal Relationship Learning With History-dependent Noise

Oct 26, 2022
Wenbo Gong, Joel Jennings, Cheng Zhang, Nick Pawlowski

Discovering causal relationships between different variables from time series data has been a long-standing challenge in many domains such as climate science, finance, and healthcare. Given the complexity of real-world relationships and the nature of observations in discrete time, causal discovery methods need to consider non-linear relations between variables, instantaneous effects, and history-dependent noise (changes in the noise distribution due to past actions). However, previous works do not offer a solution that addresses all of these problems together. In this paper, we propose a novel causal relationship learning framework for time-series data, called Rhino, which combines vector auto-regression, deep learning, and variational inference to model non-linear relationships with instantaneous effects while allowing the noise distribution to be modulated by historical observations. Theoretically, we prove the structural identifiability of Rhino. Our empirical results from extensive synthetic experiments and two real-world benchmarks demonstrate better discovery performance compared to relevant baselines, with ablation studies revealing its robustness under model misspecification.
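
Schematically, a structural model with the three ingredients listed above can be written as follows (our notation, which may differ from the paper's):

    x_t^i = f_i\big(\mathrm{Pa}_G^i(<t),\, \mathrm{Pa}_G^i(t)\big) + g_i\big(\mathrm{Pa}_G^i(<t),\, \epsilon_t^i\big),

where \mathrm{Pa}_G^i(<t) are the lagged parents of variable i under graph G, \mathrm{Pa}_G^i(t) its instantaneous parents, and \epsilon_t^i an exogenous noise variable; letting g_i depend on the lagged parents is what makes the noise history-dependent.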

* 28 pages, 8 figures, 5 tables 

NeurIPS Competition Instructions and Guide: Causal Insights for Learning Paths in Education

Aug 31, 2022
Wenbo Gong, Digory Smith, Zichao Wang, Craig Barton, Simon Woodhead, Nick Pawlowski, Joel Jennings, Cheng Zhang

In this competition, participants will address two fundamental causal challenges in machine learning in the context of education using time-series data. The first is to identify the causal relationships between different constructs, where a construct is defined as the smallest element of learning. The second challenge is to predict the impact of learning one construct on the ability to answer questions on other constructs. Addressing these challenges will enable optimisation of students' knowledge acquisition, which can be deployed in a real edtech solution impacting millions of students. Participants will run these tasks in an idealised environment with synthetic data and a real-world scenario with evaluation data collected from a series of A/B tests.

* 19 pages, NeurIPS 2022 Competition Track 

Deep End-to-end Causal Inference

Feb 04, 2022
Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang

Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, or policy making. However, research on causal discovery and causal inference has evolved separately, and combining the two domains is not trivial. In this work, we develop Deep End-to-end Causal Inference (DECI), a single flow-based method that takes in observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under mild assumptions. In addition, our method can handle heterogeneous, real-world, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Moreover, the design principle of our method can generalize beyond DECI, providing a general End-to-end Causal Inference (ECI) recipe, which enables different ECI frameworks to be built using existing methods. Our results show the superior performance of DECI compared to relevant baselines for both causal discovery and (C)ATE estimation in over a thousand experiments on synthetic datasets and other causal machine learning benchmark datasets.
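
To make the "discovery plus inference" combination concrete, here is a minimal, hypothetical sketch of simulation-based CATE estimation with a learned structural model; the `sem` object and its `sample` signature are illustrative stand-ins, not DECI's actual API.

    # Hypothetical sketch: estimate a CATE by simulating the learned SEM under
    # two interventions and averaging the outcome difference.
    import numpy as np

    def estimate_cate(sem, conditioning, treatment, outcome, t1, t0, n_samples=1000):
        """E[outcome | do(treatment=t1), conditioning] - E[outcome | do(treatment=t0), conditioning]."""
        y1 = sem.sample(n_samples, interventions={treatment: t1}, conditioning=conditioning)
        y0 = sem.sample(n_samples, interventions={treatment: t0}, conditioning=conditioning)
        return np.mean(y1[outcome]) - np.mean(y0[outcome])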

Interpreting diffusion score matching using normalizing flow

Jul 21, 2021
Wenbo Gong, Yingzhen Li

Score matching (SM) and its related counterpart, the Stein discrepancy (SD), have achieved great success in model training and evaluation. However, recent research shows their limitations when dealing with certain types of distributions. One possible fix is to augment the original score matching (or Stein discrepancy) with a diffusion matrix, yielding diffusion score matching (DSM) (or the diffusion Stein discrepancy (DSD)). However, the lack of an interpretation of the diffusion matrix limits its usage to simple distributions and manually chosen matrices. In this work, we fill this gap by interpreting the diffusion matrix using normalizing flows. Specifically, we theoretically prove that DSM (or DSD) is equivalent to the original score matching (or Stein discrepancy) evaluated in the transformed space defined by the normalizing flow, where the diffusion matrix is the inverse of the flow's Jacobian matrix. In addition, we build its connection to Riemannian manifolds and further extend it to continuous flows, where the change of DSM is characterized by an ODE.
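
For reference, in one common notation (constants and transposes may differ from the paper's), the score matching objective and its diffusion-matrix variant between a model q and the data distribution p are

    J_{\mathrm{SM}}(q\,\|\,p) = \tfrac{1}{2}\,\mathbb{E}_{p(x)}\big[\|\nabla_x \log q(x) - \nabla_x \log p(x)\|_2^2\big],
    J_{\mathrm{DSM}}(q\,\|\,p) = \tfrac{1}{2}\,\mathbb{E}_{p(x)}\big[\|D(x)^{\top}\big(\nabla_x \log q(x) - \nabla_x \log p(x)\big)\|_2^2\big].

The equivalence stated in the abstract then reads, schematically: for an invertible flow y = T(x) with Jacobian J_T(x), choosing D(x) = J_T(x)^{-1} makes J_{\mathrm{DSM}}(q\,\|\,p) coincide with J_{\mathrm{SM}} computed between the pushforward densities of q and p under T, since the log-det-Jacobian terms in the two transformed scores cancel in their difference.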

* 8 pages, International Conference on Machine Learning (ICML) INNF+ 2021 Workshop Spotlight 

Active Slices for Sliced Stein Discrepancy

Feb 08, 2021
Wenbo Gong, Kaibo Zhang, Yingzhen Li, José Miguel Hernández-Lobato

The sliced Stein discrepancy (SSD) and its kernelized variants have demonstrated promising success in goodness-of-fit tests and model learning in high dimensions. Despite their theoretical elegance, their empirical performance depends crucially on the search for optimal slicing directions to discriminate between two distributions. Unfortunately, previous gradient-based optimisation approaches for this task return sub-optimal results: they are computationally expensive, sensitive to initialization, and lack theoretical guarantees of convergence. We address these issues in two steps. First, we provide theoretical results showing that the requirement of using optimal slicing directions in the kernelized version of SSD can be relaxed, validating the resulting discrepancy with finitely many random slicing directions. Second, given that good slicing directions are crucial for practical performance, we propose a fast algorithm for finding such directions based on ideas from active sub-space construction and spectral decomposition. Experiments on goodness-of-fit tests and model learning show that our approach achieves both improved performance and faster convergence. In particular, we demonstrate a 14-80x speed-up in goodness-of-fit tests when compared with gradient-based alternatives.
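
As a rough illustration of the active-subspace idea mentioned above (not the paper's exact construction), one can take the leading eigenvectors of an averaged outer product of gradient-like signals as candidate slicing directions; `score_diff` below is a hypothetical callable standing in for whatever signal discriminates the two distributions.

    # Illustrative active-subspace-style search for slicing directions: spectral
    # decomposition of an averaged gradient outer product. `score_diff` is a
    # hypothetical stand-in; the paper's actual construction may differ.
    import numpy as np

    def active_slices(samples, score_diff, n_directions):
        G = np.stack([score_diff(x) for x in samples])   # (n, d) gradient-like signals
        M = G.T @ G / len(samples)                       # (d, d) averaged outer product
        _, eigvecs = np.linalg.eigh(M)                   # eigenvalues in ascending order
        return eigvecs[:, ::-1][:, :n_directions].T      # top directions, one per row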

* 22 pages, 7 figures 

Sliced Kernelized Stein Discrepancy

Jun 30, 2020
Wenbo Gong, Yingzhen Li, José Miguel Hernández-Lobato

The kernelized Stein discrepancy (KSD), though extensively used in goodness-of-fit tests and model learning, suffers from the curse of dimensionality. We address this issue by proposing the sliced Stein discrepancy and its scalable and kernelized variants, which employ kernel-based test functions defined on optimal one-dimensional projections instead of the full input in high dimensions. When applied to goodness-of-fit tests, extensive experiments show that the proposed discrepancy significantly outperforms KSD and various baselines in high dimensions. For model learning, we show its advantages by training an independent component analysis model, compared with existing Stein discrepancy baselines. We further propose a novel particle inference method called sliced Stein variational gradient descent (S-SVGD) which alleviates the mode-collapse issue of SVGD in training variational autoencoders.
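
For context, the standard (full-dimensional) KSD between a sample distribution q and a model p with score s_p(x) = \nabla_x \log p(x) is

    \mathrm{KSD}^2(q\,\|\,p) = \mathbb{E}_{x,x'\sim q}\big[u_p(x,x')\big],
    u_p(x,x') = s_p(x)^{\top} k(x,x')\, s_p(x') + s_p(x)^{\top}\nabla_{x'} k(x,x') + \nabla_x k(x,x')^{\top} s_p(x') + \operatorname{tr}\big(\nabla_x \nabla_{x'} k(x,x')\big).

Because the kernel k acts on the full d-dimensional input, the discrepancy degrades in high dimensions; the sliced variants described above instead restrict the score and the kernel-based test functions to one-dimensional projections of the input.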

Icebreaker: Element-wise Active Information Acquisition with Bayesian Deep Latent Gaussian Model

Aug 14, 2019
Wenbo Gong, Sebastian Tschiatschek, Richard Turner, Sebastian Nowozin, José Miguel Hernández-Lobato, Cheng Zhang

In this paper we introduce the ice-start problem, i.e., the challenge of deploying machine learning models when little or no training data is initially available and acquiring each feature element of the data is associated with a cost. This setting is representative of real-world machine learning applications. For instance, in the health-care domain, when training an AI system to predict patient metrics from lab tests, obtaining every single measurement comes with a high cost. Active learning, where only the label is associated with a cost, does not apply to this problem, because performing all possible lab tests to acquire a new training datum would be costly, as well as unnecessary due to redundancy. We propose Icebreaker, a principled framework for approaching the ice-start problem. Icebreaker uses a full Bayesian Deep Latent Gaussian Model (BELGAM) with a novel inference method that combines recent advances in amortized inference and stochastic gradient MCMC to enable fast and accurate posterior inference. By utilizing BELGAM's ability to fully quantify model uncertainty, we also propose two information acquisition functions for imputation and active prediction problems. We demonstrate that BELGAM performs significantly better than previous VAE (variational autoencoder) based models when the dataset size is small, on both machine learning benchmarks and real-world recommender system and health-care applications. Moreover, based on BELGAM, Icebreaker further improves performance and demonstrates the ability to use a minimal amount of training data to obtain the best test-time performance.
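
A minimal sketch of the element-wise acquisition loop implied above (hypothetical names, not the paper's implementation): score every missing feature element with an acquisition function estimated from posterior samples and query the highest-scoring one.

    # Hypothetical sketch: rank unobserved (row, feature) elements by an
    # acquisition score computed from posterior samples; `info_gain` stands in
    # for the paper's acquisition functions.
    def acquire_next(candidates, posterior_samples, info_gain):
        """Return the (row, feature) index with the highest acquisition score."""
        scores = {(i, j): info_gain(i, j, posterior_samples) for (i, j) in candidates}
        return max(scores, key=scores.get)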

Meta-Learning for Stochastic Gradient MCMC

Jun 12, 2018
Wenbo Gong, Yingzhen Li, José Miguel Hernández-Lobato

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model, and even a simple modification of the underlying dynamical system requires significant physical intuition. This paper presents the first meta-learning algorithm that allows automated design of the underlying continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of neural network energy landscapes. Experiments validate the proposed approach on Bayesian fully connected neural network and Bayesian recurrent neural network tasks, showing that the learned sampler outperforms generic, hand-designed SG-MCMC algorithms and generalizes to different datasets and larger architectures.
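
For context, state-dependent SG-MCMC dynamics of the kind described here are commonly written in the "complete recipe" form of Ma et al. (2015); schematically,

    \mathrm{d}z = \big[-(D(z) + Q(z))\,\nabla H(z) + \Gamma(z)\big]\,\mathrm{d}t + \sqrt{2 D(z)}\,\mathrm{d}W_t,
    \qquad \Gamma_i(z) = \textstyle\sum_j \frac{\partial}{\partial z_j}\big(D_{ij}(z) + Q_{ij}(z)\big),

where H is the stochastic energy function, D(z) a positive semi-definite diffusion matrix, and Q(z) a skew-symmetric curl matrix. On our reading of the abstract, meta-learning the sampler amounts to parameterizing the state-dependent D and Q with neural networks rather than fixing them by hand.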
