Dino Sejdinovic

FaIRGP: A Bayesian Energy Balance Model for Surface Temperatures Emulation

Jul 14, 2023
Shahine Bouabid, Dino Sejdinovic, Duncan Watson-Parris

Emulators, or reduced complexity climate models, are surrogate Earth system models that produce projections of key climate quantities with minimal computational resources. Using time-series modeling or more advanced machine learning techniques, data-driven emulators have emerged as a promising avenue of research, producing spatially resolved climate responses that are visually indistinguishable from state-of-the-art Earth system models. Yet, their lack of physical interpretability limits their wider adoption. In this work, we introduce FaIRGP, a data-driven emulator that satisfies the physical temperature response equations of an energy balance model. The result is an emulator that (i) enjoys the flexibility of statistical machine learning models and can learn from observations, and (ii) has a robust physical grounding with interpretable parameters that can be used to make inference about the climate system. Further, our Bayesian approach allows a principled and mathematically tractable uncertainty quantification. Our model demonstrates skillful emulation of global mean surface temperature and spatial surface temperatures across realistic future scenarios. Its ability to learn from data allows it to outperform energy balance models, while its robust physical foundation safeguards against the pitfalls of purely data-driven models. We also illustrate how FaIRGP can be used to obtain estimates of top-of-atmosphere radiative forcing and discuss the benefits of its mathematical tractability for applications such as detection and attribution or precipitation emulation. We hope that this work will contribute to widening the adoption of data-driven methods in climate emulation.
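
As background, emulators of this kind build on the temperature response equations of an energy balance model. A minimal sketch (a standard two-box energy balance model written in our own notation, not necessarily the exact formulation used in FaIRGP) is

    C \frac{dT}{dt} = F(t) - \lambda T - \kappa (T - T_D), \qquad
    C_D \frac{dT_D}{dt} = \kappa (T - T_D),

where T and T_D are the surface and deep-ocean temperature anomalies, F(t) is the top-of-atmosphere radiative forcing, \lambda is the climate feedback parameter, \kappa is the ocean heat uptake coefficient, and C, C_D are heat capacities. FaIRGP, as described above, keeps a physical backbone of this kind while adding the data-driven flexibility of a Gaussian process.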

Explaining the Uncertain: Stochastic Shapley Values for Gaussian Process Models

May 24, 2023
Siu Lun Chau, Krikamol Muandet, Dino Sejdinovic

We present a novel approach for explaining Gaussian processes (GPs) that can utilize the full analytical covariance structure present in GPs. Our method is based on the popular solution concept of Shapley values extended to stochastic cooperative games, resulting in explanations that are random variables. The GP explanations generated using our approach satisfy favorable axioms similar to those of standard Shapley values and possess a tractable covariance function across features and data observations. This covariance allows for quantifying explanation uncertainties and studying the statistical dependencies between explanations. We further extend our framework to the problem of predictive explanation, and propose a Shapley prior over the explanation function to predict Shapley values for new data based on previously computed ones. Our extensive illustrations demonstrate the effectiveness of the proposed approach.
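
For reference, the classical Shapley value that the stochastic extension above builds on assigns to feature i the weighted average of its marginal contributions over feature coalitions S (standard cooperative game theory, not notation specific to the paper):

    \phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \left( v(S \cup \{i\}) - v(S) \right).

In the stochastic cooperative game described above, the value function v is random (derived from the GP posterior rather than a point prediction), so the resulting explanations \phi_i are themselves random variables with a tractable covariance structure.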

* 26 pages, 6 figures 

A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods

May 24, 2023
Veit David Wild, Sahra Ghalebikesabi, Dino Sejdinovic, Jeremias Knoblauch

We establish the first mathematically rigorous link between Bayesian, variational Bayesian, and ensemble methods. A key step towards this is to reformulate the non-convex optimisation problem typically encountered in deep learning as a convex optimisation problem in the space of probability measures. On a technical level, our contribution amounts to studying generalised variational inference through the lens of Wasserstein gradient flows. The result is a unified theory of various seemingly disconnected approaches that are commonly used for uncertainty quantification in deep learning -- including deep ensembles and (variational) Bayesian methods. This offers a fresh perspective on the reasons behind the success of deep ensembles over procedures based on parameterised variational inference, and allows the derivation of new ensembling schemes with convergence guarantees. We showcase this by proposing a family of interacting deep ensembles with direct parallels to the interactions of particle systems in thermodynamics, and use our theory to prove the convergence of these algorithms to a well-defined global minimiser on the space of probability measures.
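
As a rough schematic of the reformulation described above (our own notation, not taken from the paper), the optimisation is lifted from individual parameter vectors \theta to probability measures q over the parameter space \Theta:

    \min_{q \in \mathcal{P}(\Theta)} \; \mathbb{E}_{\theta \sim q}\!\left[ L(\theta) \right] + D(q),

where L is the (non-convex) training loss and D is a regulariser on the measure; with D a Kullback-Leibler divergence to a prior this recovers (generalised) variational Bayes. The first term is linear in q and typical regularisers are convex, so the lifted problem is convex even though L is not, and Wasserstein gradient flows on such objectives correspond to interacting particle systems, i.e., ensembles.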

Squared Neural Families: A New Class of Tractable Density Models

May 22, 2023
Russell Tsuchida, Cheng Soon Ong, Dino Sejdinovic

Flexible models for probability distributions are an essential ingredient in many machine learning tasks. We develop and investigate a new class of probability distributions, which we call a Squared Neural Family (SNEFY), formed by squaring the 2-norm of a neural network and normalising it with respect to a base measure. Following reasoning similar to the well-established connections between infinitely wide neural networks and Gaussian processes, we show that SNEFYs admit closed-form normalising constants in many cases of interest, thereby resulting in flexible yet fully tractable density models. SNEFYs strictly generalise classical exponential families, are closed under conditioning, and have tractable marginal distributions. Their utility is illustrated on a variety of density estimation and conditional density estimation tasks. Software is available at https://github.com/RussellTsuchida/snefy.
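
Concretely, a density of this form can be written (schematically, following the construction described above) as

    p(x) = \frac{\| f_\theta(x) \|_2^2}{\int \| f_\theta(x') \|_2^2 \, d\mu(x')},

where f_\theta is a neural network with vector-valued output and \mu is the base measure. The key claim above is that, for many choices of activation function and base measure, the normalising integral in the denominator is available in closed form, so the density is fully tractable despite being parameterised by a neural network.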

* Preprint 

Returning The Favour: When Regression Benefits From Probabilistic Causal Knowledge

Jan 26, 2023
Shahine Bouabid, Jake Fawkes, Dino Sejdinovic

A directed acyclic graph (DAG) provides valuable prior knowledge that is often discarded in regression tasks in machine learning. We show that the independences arising from the presence of collider structures in DAGs provide meaningful inductive biases, which constrain the regression hypothesis space and improve predictive performance. We introduce collider regression, a framework to incorporate probabilistic causal knowledge from a collider in a regression problem. When the hypothesis space is a reproducing kernel Hilbert space, we prove a strictly positive generalisation benefit under mild assumptions and provide closed-form estimators of the empirical risk minimiser. Experiments on synthetic and climate model data demonstrate performance gains of the proposed methodology.
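
As a toy illustration of the collider structure being exploited (a hypothetical data-generating process for illustration only, not the paper's experimental setup), consider X -> Y <- Z: the causes X and Z are marginally independent, but become dependent once the collider Y is observed, and independences of this kind are what constrain the regression hypothesis space.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# X and Z are independent causes of the collider Y.
X = rng.normal(size=n)
Z = rng.normal(size=n)
Y = X + Z + 0.1 * rng.normal(size=n)

# Marginally, X and Z are (nearly) uncorrelated.
print(np.corrcoef(X, Z)[0, 1])               # close to 0

# Conditioning on the collider (here, crudely, Y close to 0)
# induces a strong negative dependence between X and Z.
mask = np.abs(Y) < 0.1
print(np.corrcoef(X[mask], Z[mask])[0, 1])   # strongly negative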

Doubly Robust Kernel Statistics for Testing Distributional Treatment Effects Even Under One Sided Overlap

Dec 09, 2022
Jake Fawkes, Robert Hu, Robin J. Evans, Dino Sejdinovic

As causal inference becomes more widespread, the importance of having good tools to test for causal effects increases. In this work we focus on the problem of testing for causal effects that manifest in a difference in distribution between treatment and control. We build on work applying kernel methods to causality, considering the previously introduced Counterfactual Mean Embedding (CfME) framework. We improve on this by proposing the Doubly Robust Counterfactual Mean Embedding (DR-CfME), which has better theoretical properties than its predecessor by leveraging semiparametric theory. This leads us to propose new kernel-based test statistics for distributional effects which are based upon doubly robust estimators of treatment effects. We propose two test statistics, one which is a direct improvement on previous work and one which can be applied even when the support of the treatment arm is a subset of that of the control arm. We demonstrate the validity of our methods on simulated and real-world data, as well as giving an application in off-policy evaluation.
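
For context, the kernel quantities underlying such tests are mean embeddings and the maximum mean discrepancy (MMD); in standard notation (not specific to the doubly robust estimators above),

    \mu_P = \mathbb{E}_{X \sim P}\!\left[ k(X, \cdot) \right], \qquad
    \mathrm{MMD}^2(P, Q) = \| \mu_P - \mu_Q \|_{\mathcal{H}}^2
    = \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')],

with X, X' ~ P and Y, Y' ~ Q independent. A distributional treatment effect test of the kind described above compares the embedding of the observed treated outcome distribution with an embedding of the counterfactual outcome distribution; the contribution here is to estimate the latter with doubly robust estimators.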

* 9 pages, Preprint 

Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation

Nov 02, 2022
Diego Martinez-Taboada, Dino Sejdinovic

The counterfactual distribution models the effect of the treatment in the untreated group. While most work focuses on the expected value of the treatment effect, one may be interested in the whole counterfactual distribution or other quantities associated with it. Building on the framework of Bayesian conditional mean embeddings, we propose a Bayesian approach for modeling the counterfactual distribution, which leads to quantifying the epistemic uncertainty about the distribution. The framework naturally extends to the setting where one observes multiple treatment effects (e.g., an intermediate effect after an interim period, and an ultimate treatment effect which is of main interest) and additionally allows for modelling uncertainty about the relationship between these effects. To this end, we present three novel Bayesian methods to estimate the expectation of the ultimate treatment effect when only noisy samples of the dependence between intermediate and ultimate effects are provided. These methods differ in the source of uncertainty considered and allow for combining two sources of data. Moreover, we generalize these ideas to the off-policy evaluation framework, which can be seen as an extension of the counterfactual estimation problem. We empirically explore the calibration of the algorithms in two different experimental settings which require data fusion, and illustrate the value of considering the uncertainty stemming from the two sources of data.
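
For orientation, in the standard counterfactual mean embedding setup (our summary, with notation not taken from the paper), under the usual ignorability assumptions the counterfactual distribution of the treated outcome for the untreated group is

    P_{Y(1) \mid T = 0}(\cdot) = \int P_{Y \mid X = x,\, T = 1}(\cdot) \; dP_{X \mid T = 0}(x),

and its kernel mean embedding is obtained by composing a conditional mean embedding of Y given X in the treated group with the covariate distribution of the control group. The Bayesian treatment above places a posterior over this embedding, yielding epistemic uncertainty about the whole counterfactual distribution rather than a point estimate.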

Sequential Decision Making on Unmatched Data using Bayesian Kernel Embeddings

Oct 25, 2022
Diego Martinez-Taboada, Dino Sejdinovic

The problem of sequentially maximizing the expectation of a function seeks to maximize the expected value of a function of interest without having direct control over its features. Instead, the distribution of those features depends on a given context and an action taken by an agent. In contrast to Bayesian optimization, the arguments of the function are not under the agent's control, but are indirectly determined by the agent's action based on a given context. If the information in the features is to be included in the maximization problem, the full conditional distribution of the features, rather than its expectation only, needs to be accounted for. Furthermore, the function itself is unknown: only noisy observations of it are available, potentially requiring the use of unmatched data sets. We propose a novel algorithm for this problem which takes into account the uncertainty arising from the estimation of both the conditional distribution of the features and the unknown function, by modeling the former as a Bayesian conditional mean embedding and the latter as a Gaussian process. Our algorithm empirically outperforms the current state-of-the-art algorithm in the experiments conducted.
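
Schematically (in our own notation rather than the paper's), the problem described above is, for a given context c, to choose the action

    a^\ast = \arg\max_{a} \; \mathbb{E}_{X \sim P(\cdot \mid c,\, a)} \left[ f(X) \right],

where neither the conditional feature distribution P(\cdot \mid c, a) nor the function f is known. The proposed algorithm models the former with a Bayesian conditional mean embedding and the latter with a Gaussian process, so that both sources of uncertainty are accounted for when selecting actions.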

Kernel Biclustering algorithm in Hilbert Spaces

Aug 07, 2022
Marcos Matabuena, J. C. Vidal, Oscar Hernan Madrid Padilla, Dino Sejdinovic

Biclustering algorithms partition data and covariates simultaneously, providing new insights in several domains, such as analyzing gene expression to discover new biological functions. This paper develops a new model-free biclustering algorithm in abstract spaces using the notions of energy distance (ED) and maximum mean discrepancy (MMD) -- two distances between probability distributions capable of handling complex data such as curves or graphs. The proposed method can learn more general and complex cluster shapes than most approaches in the existing literature, which usually focus on detecting mean and variance differences. Although the biclustering configurations of our approach are constrained to disjoint structures at the datum and covariate levels, the results are competitive: given a proper kernel choice, our method performs similarly to state-of-the-art methods in their optimal scenarios and outperforms them when cluster differences are concentrated in higher-order moments. The model's performance has been tested in several settings involving simulated and real-world datasets. Finally, new theoretical consistency results are established using tools from the theory of optimal transport.
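
For reference, the energy distance mentioned above is, in its standard population form (not notation specific to the paper),

    \mathrm{ED}(P, Q) = 2\,\mathbb{E}\|X - Y\| - \mathbb{E}\|X - X'\| - \mathbb{E}\|Y - Y'\|,

with X, X' ~ P and Y, Y' ~ Q independent; energy distance is an instance of the MMD with a particular distance-induced kernel, which is why the two discrepancies can be used interchangeably in kernel methods of this kind.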

Discussion of 'Multiscale Fisher's Independence Test for Multivariate Dependence'

Jun 22, 2022
Antonin Schrab, Wittawat Jitkrittum, Zoltán Szabó, Dino Sejdinovic, Arthur Gretton

We discuss how MultiFIT, the Multiscale Fisher's Independence Test for Multivariate Dependence proposed by Gorsky and Ma (2022), compares to existing linear-time kernel tests based on the Hilbert-Schmidt independence criterion (HSIC). We highlight the fact that the level of the kernel tests can be controlled exactly at any finite sample size, as is the case with the level of MultiFIT. In our experiments, we observe some of the performance limitations of MultiFIT in terms of test power.
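
For context on the kernel tests being compared against, HSIC measures dependence as the squared MMD between the joint distribution and the product of the marginals (the standard definition, not specific to this discussion):

    \mathrm{HSIC}(X, Y) = \mathrm{MMD}^2\!\left( P_{XY},\, P_X \otimes P_Y \right),

which is zero if and only if X and Y are independent when characteristic kernels are used; the linear-time kernel tests referred to above are built on estimators of this quantity.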

* 8 pages 