Nicolai Meinshausen

Engression: Extrapolation for Nonlinear Regression?

Jul 03, 2023
Xinwei Shen, Nicolai Meinshausen

Extrapolation is crucial in many statistical and machine learning applications, as it is common to encounter test data outside the training support. However, extrapolation is a considerable challenge for nonlinear models. Conventional models typically struggle in this regard: while tree ensembles provide a constant prediction beyond the support, neural network predictions tend to become uncontrollable. This work aims to provide a nonlinear regression methodology whose reliability does not break down immediately at the boundary of the training support. Our primary contribution is a new method called "engression", which, at its core, is a distributional regression technique for pre-additive noise models, where the noise is added to the covariates before applying a nonlinear transformation. Our experimental results indicate that this model is typically suitable for many real data sets. We show that engression can successfully perform extrapolation under some assumptions, such as a strictly monotone function class, whereas traditional regression approaches such as least-squares regression and quantile regression fall short under the same assumptions. We establish the advantages of engression over existing approaches in terms of extrapolation, showing that engression consistently provides a meaningful improvement. Our empirical results, from both simulated and real data, validate these findings, highlighting the effectiveness of the engression method. Software implementations of engression are available in both R and Python.
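
At its core, the method fits a pre-additive noise model $Y = g(X + \varepsilon)$ by minimising the energy score between model samples and observations. The following is a minimal PyTorch sketch of that idea; the network architecture, noise scale, toy data, and training loop are illustrative assumptions, not the interface of the released R/Python packages.

```python
import torch
import torch.nn as nn

# Pre-additive noise model: noise perturbs the covariates *before*
# the nonlinear transformation g, i.e. samples are g(x + eps).
g = nn.Sequential(nn.Linear(1, 100), nn.ReLU(), nn.Linear(100, 1))
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

def sample(x):
    return g(x + torch.randn_like(x))        # one draw from the model

x = torch.randn(512, 1)                      # toy covariates
y = (x + 0.3 * torch.randn_like(x)) ** 3     # toy monotone target

for _ in range(2000):
    s1, s2 = sample(x), sample(x)            # two independent draws
    # Energy-score loss: E|Y - S| - 0.5 * E|S - S'|
    loss = (y - s1).abs().mean() - 0.5 * (s1 - s2).abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Averaging many draws of `sample(x)` gives a point prediction, while their empirical quantiles give distributional predictions, which is what makes this a distributional regression method.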

Confidence and Uncertainty Assessment for Distributional Random Forests

Feb 11, 2023
Jeffrey Näf, Corinne Emmenegger, Peter Bühlmann, Nicolai Meinshausen

The Distributional Random Forest (DRF) is a recently introduced Random Forest algorithm to estimate multivariate conditional distributions. Due to its general estimation procedure, it can be employed to estimate a wide range of targets such as conditional average treatment effects, conditional quantiles, and conditional correlations. However, only results about the consistency and convergence rate of the DRF prediction are available so far. We characterize the asymptotic distribution of DRF and develop a bootstrap approximation of it. This allows us to derive inferential tools for quantifying standard errors and the construction of confidence regions that have asymptotic coverage guarantees. In simulation studies, we empirically validate the developed theory for inference of low-dimensional targets and for testing distributional differences between two populations.
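
The inferential tools rest on approximating the sampling distribution of a forest-based estimate. As a simplified illustration of that bootstrap idea, the sketch below uses a generic sklearn forest and a plain percentile bootstrap; this is a stand-in for intuition, not the paper's exact half-sampling construction or the drf package.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] ** 2 + rng.normal(size=500)
x0 = np.zeros((1, 3))                            # query point

preds = []
for b in range(200):
    idx = rng.integers(0, len(X), size=len(X))   # resample rows with replacement
    f = RandomForestRegressor(n_estimators=100, random_state=b)
    f.fit(X[idx], y[idx])
    preds.append(f.predict(x0)[0])

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% bootstrap CI for E[Y | X = x0]: [{lo:.3f}, {hi:.3f}]")
```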

Robust detection and attribution of climate change under interventions

Dec 09, 2022
Enikő Székely, Sebastian Sippel, Nicolai Meinshausen, Guillaume Obozinski, Reto Knutti

Fingerprints are key tools in climate change detection and attribution (D&A) that are used to determine whether changes in observations are different from internal climate variability (detection), and whether observed changes can be assigned to specific external drivers (attribution). We propose a direct D&A approach based on supervised learning to extract fingerprints that lead to robust predictions under relevant interventions on exogenous variables, i.e., climate drivers other than the target. We employ anchor regression, a distributionally robust statistical learning method inspired by causal inference that extrapolates well to perturbed data under the interventions considered. The residuals from the prediction achieve either uncorrelatedness or mean independence with the exogenous variables, thus guaranteeing robustness. We define D&A as a unified hypothesis testing framework that relies on the same statistical model but uses different targets and test statistics. In the experiments, we first show that the CO2 forcing can be robustly predicted from temperature spatial patterns under strong interventions on the solar forcing. Second, we illustrate attribution to the greenhouse gases and aerosols while protecting against interventions on the aerosols and CO2 forcing, respectively. Our study shows that incorporating robustness constraints against relevant interventions may significantly benefit detection and attribution of climate change.
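
Anchor regression has a convenient closed form: penalising the projection of the residuals onto the anchor variables is equivalent to ordinary least squares on suitably transformed data. A small numpy sketch under toy data (variable names and the forcing analogy are illustrative assumptions):

```python
import numpy as np

def anchor_regression(X, y, A, gamma):
    # Projection onto the column span of the anchors A.
    P = A @ np.linalg.pinv(A.T @ A) @ A.T
    shrink = 1.0 - np.sqrt(gamma)
    Xt = X - shrink * (P @ X)                  # transformed covariates
    yt = y - shrink * (P @ y)                  # transformed response
    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    return beta

rng = np.random.default_rng(1)
A = rng.normal(size=(300, 1))                  # anchor, e.g. an exogenous driver
X = A @ np.array([[1.5, -0.5]]) + rng.normal(size=(300, 2))
y = X @ np.array([1.0, 0.0]) + 0.5 * A[:, 0] + rng.normal(size=300)

print(anchor_regression(X, y, A, gamma=10.0))
```

Larger gamma enforces stronger robustness to interventions on the anchor, at the price of some in-distribution fit; gamma = 1 recovers ordinary least squares.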

Scalable Sensitivity and Uncertainty Analysis for Causal-Effect Estimates of Continuous-Valued Interventions

Apr 26, 2022
Andrew Jesson, Alyson Douglas, Peter Manshausen, Nicolai Meinshausen, Philip Stier, Yarin Gal, Uri Shalit

Estimating the effects of continuous-valued interventions from observational data is critically important in fields such as climate science, healthcare, and economics. Recent work focuses on designing neural-network architectures and regularization functions to allow for scalable estimation of average and individual-level dose-response curves from high-dimensional, large-sample data. Such methodologies assume ignorability (all confounding variables are observed) and positivity (all levels of treatment can be observed for every unit described by a given covariate value), assumptions that are especially difficult to satisfy in the continuous treatment regime. Less attention has been paid to developing scalable sensitivity and uncertainty analyses that quantify the ignorance induced in our estimates when these assumptions are relaxed. Here, we develop a continuous treatment-effect marginal sensitivity model (CMSM) and derive bounds that agree with both the observed data and a researcher-defined level of hidden confounding. We introduce a scalable algorithm to derive the bounds and uncertainty-aware deep models to efficiently estimate these bounds for high-dimensional, large-sample observational data. We validate our methods using both synthetic and real-world experiments. For the latter, we work in concert with climate scientists interested in evaluating the climatological impacts of human emissions on cloud properties using satellite observations from the past 15 years: a finite-data problem known to be complicated by the presence of a multitude of unobserved confounders.
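
To convey the flavour of marginal-sensitivity bounds in the simplest possible setting, the sketch below bounds a plain mean when hidden confounding of strength Lambda lets each sample's weight vary within [1/Lambda, Lambda]; the extremal weighted mean then puts high weight on one tail. This is a deliberately simplified stand-in, not the paper's continuous-treatment CMSM algorithm or its deep-model estimator.

```python
import numpy as np

def msm_upper_bound(y, lam):
    # Upper bound on the mean when each weight may vary in [1/lam, lam]:
    # the optimum gives weight lam to the largest k outcomes and 1/lam
    # to the rest, so a sweep over k suffices.
    y = np.sort(y)
    n = len(y)
    best = y.mean()
    for k in range(n + 1):
        w = np.r_[np.full(n - k, 1.0 / lam), np.full(k, lam)]
        best = max(best, np.average(y, weights=w))
    return best

y = np.random.default_rng(2).normal(size=1000)
for lam in (1.0, 1.5, 2.0):
    print(lam, msm_upper_bound(y, lam))      # bound widens with Lambda
```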

* 22 pages 

fairadapt: Causal Reasoning for Fair Data Pre-processing

Oct 19, 2021
Drago Plečko, Nicolas Bennett, Nicolai Meinshausen

Machine learning algorithms are useful for various prediction tasks, but they can also learn how to discriminate, based on gender, race or other sensitive attributes. This realization gave rise to the field of fair machine learning, which aims to measure and mitigate such algorithmic bias. This manuscript describes the R-package fairadapt, which implements a causal inference pre-processing method. By making use of a causal graphical model and the observed data, the method can be used to address hypothetical questions of the form "What would my salary have been, had I been of a different gender/race?". Such individual level counterfactual reasoning can help eliminate discrimination and help justify fair decisions. We also discuss appropriate relaxations which assume certain causal pathways from the sensitive attribute to the outcome are not discriminatory.
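
The adaptation underlying fairadapt preserves an individual's quantile: a value is mapped to the value occupying the same quantile in the counterfactual group's distribution. Here is a one-variable numpy sketch of that mechanism; the actual package works along a causal graph with conditional quantiles, and the names and data below are toy assumptions.

```python
import numpy as np

def quantile_match(values, source, target):
    # Empirical quantile of each value within its own group ...
    q = np.searchsorted(np.sort(source), values) / len(source)
    # ... mapped to the corresponding quantile of the other group.
    return np.quantile(target, np.clip(q, 0, 1))

rng = np.random.default_rng(3)
salary_a = rng.normal(50, 10, size=1000)     # group A salaries (toy)
salary_b = rng.normal(60, 12, size=1000)     # group B salaries (toy)

# Counterfactual "what would my salary have been" for group A members.
print(quantile_match(salary_a[:5], salary_a, salary_b))
```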

* Keywords: algorithmic fairness, causal inference, machine learning 

Predicting sepsis in multi-site, multi-national intensive care cohorts using deep learning

Jul 12, 2021
Michael Moor, Nicolas Bennett, Drago Plečko, Max Horn, Bastian Rieck, Nicolai Meinshausen, Peter Bühlmann, Karsten Borgwardt

Despite decades of clinical research, sepsis remains a global public health crisis with high mortality and morbidity. Currently, when sepsis is detected and the underlying pathogen is identified, organ damage may have already progressed to irreversible stages. Effective sepsis management is therefore highly time-sensitive. By systematically analysing trends in the plethora of clinical data available in the intensive care unit (ICU), an early prediction of sepsis could lead to earlier pathogen identification, resistance testing, and effective antibiotic and supportive treatment, and thereby become a life-saving measure. Here, we developed and validated a machine learning (ML) system for the prediction of sepsis in the ICU. Our analysis represents the largest multi-national, multi-centre in-ICU study for sepsis prediction using ML to date. Our dataset contains $156,309$ unique ICU admissions, which represent a refined and harmonised subset of five large ICU databases originating from three countries. Using the international consensus definition Sepsis-3, we derived hourly-resolved sepsis label annotations, amounting to $26,734$ ($17.1\%$) septic stays. We compared our approach, a deep self-attention model, to several clinical baselines as well as ML baselines and performed an extensive internal and external validation within and across databases. On average, our model was able to predict sepsis with an AUROC of $0.847 \pm 0.050$ (internal out-of-sample validation) and $0.761 \pm 0.052$ (external validation). For a harmonised prevalence of $17\%$, at $80\%$ recall our model detects septic patients with $39\%$ precision 3.7 hours in advance.
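
As a rough structural sketch of the model class used, a self-attention network producing hourly sepsis scores, here is a toy PyTorch stand-in; the dimensions, depth, and masking are illustrative assumptions, not the study's actual configuration.

```python
import torch
import torch.nn as nn

d_feat, d_model, hours = 40, 64, 48
embed = nn.Linear(d_feat, d_model)             # per-hour measurement embedding
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, 1)                   # one sepsis logit per hour

x = torch.randn(8, hours, d_feat)              # batch of 8 ICU stays
# Causal mask: a prediction at hour t may not attend to future hours.
mask = torch.triu(torch.ones(hours, hours), diagonal=1).bool()
logits = head(encoder(embed(x), mask=mask)).squeeze(-1)
print(logits.shape)                            # (8, 48): hourly sepsis scores
```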

Distributional Random Forests: Heterogeneity Adjustment and Multivariate Distributional Regression

May 29, 2020
Domagoj Ćevid, Loris Michel, Nicolai Meinshausen, Peter Bühlmann

We propose an adaptation of the Random Forest algorithm to estimate the conditional distribution of a possibly multivariate response. We suggest a new splitting criterion based on the MMD two-sample test, which is suitable for detecting heterogeneity in multivariate distributions. The weights provided by the forest can be conveniently used as an input to other methods in order to locally solve various learning problems. The code is available as \texttt{R}-package \texttt{drf}.
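
The splitting criterion compares the multivariate responses falling into the two candidate children with an MMD two-sample statistic; heterogeneous children yield a large value. A small numpy sketch with a Gaussian-kernel V-statistic (the bandwidth and toy data are illustrative choices):

```python
import numpy as np

def mmd2(Y1, Y2, bw=1.0):
    # Gaussian-kernel MMD^2 between two samples of multivariate responses.
    k = lambda A, B: np.exp(
        -np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1) / (2 * bw**2)
    )
    return k(Y1, Y1).mean() + k(Y2, Y2).mean() - 2 * k(Y1, Y2).mean()

rng = np.random.default_rng(4)
left = rng.normal(0.0, 1, size=(200, 2))       # responses in left child
right = rng.normal(0.5, 1, size=(200, 2))      # shifted distribution
print(mmd2(left, right))                       # larger => more heterogeneity
```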

Fair Data Adaptation with Quantile Preservation

Nov 15, 2019
Drago Plečko, Nicolai Meinshausen

Fairness of classification and regression has received much attention recently and various, partially non-compatible, criteria have been proposed. The fairness criteria can be enforced for a given classifier or, alternatively, the data can be adapted to ensure that every classifier trained on the data will adhere to desired fairness criteria. We present a practical data adaptation method based on quantile preservation in causal structural equation models. The data adaptation is based on a presumed counterfactual model for the data. While the counterfactual model itself cannot be verified experimentally, we show that certain population notions of fairness are still guaranteed even if the counterfactual model is misspecified. The precise nature of the fulfilled non-causal fairness notion (such as demographic parity, separation or sufficiency) depends on the structure of the underlying causal model and the choice of resolving variables. We describe an implementation of the proposed data adaptation procedure based on Random Forests and demonstrate its practical use on simulated and real-world data.

The xyz algorithm for fast interaction search in high-dimensional data

Sep 17, 2018
Gian-Andrea Thanei, Nicolai Meinshausen, Rajen D. Shah

When performing regression on a dataset with $p$ variables, it is often of interest to go beyond using main linear effects and include interactions as products between individual variables. For small-scale problems, these interactions can be computed explicitly but this leads to a computational complexity of at least $\mathcal{O}(p^2)$ if done naively. This cost can be prohibitive if $p$ is very large. We introduce a new randomised algorithm that is able to discover interactions with high probability and under mild conditions has a runtime that is subquadratic in $p$. We show that strong interactions can be discovered in almost linear time, whilst finding weaker interactions requires $\mathcal{O}(p^\alpha)$ operations for $1 < \alpha < 2$ depending on their strength. The underlying idea is to transform interaction search into a closest-pair problem which can be solved efficiently in subquadratic time. The algorithm is called $\mathit{xyz}$ and is implemented in the language R. We demonstrate its efficiency for application to genome-wide association studies, where more than $10^{11}$ interactions can be screened in under $280$ seconds with a single-core $1.2$ GHz CPU.
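
The closest-pair reduction can be sketched for binary data: with entries of $X$ and $y$ in $\{-1, +1\}$, a strong interaction $(j, k)$ means column $j$ of $X$ matches column $k$ of $Z = X \circ y$, and matches can be found by hashing columns on small random row subsets rather than comparing all $\mathcal{O}(p^2)$ pairs. The numpy sketch below is a simplified rendering of that idea, not the released xyz package; the subset size and number of repetitions are illustrative choices.

```python
import numpy as np
from collections import defaultdict

def xyz_candidates(X, y, n_proj=20, rows=12, rng=np.random.default_rng(5)):
    Z = X * y[:, None]                 # column k of Z is x_k * y elementwise
    hits = defaultdict(int)
    for _ in range(n_proj):
        r = rng.choice(len(X), size=rows, replace=False)   # random row subset
        buckets = defaultdict(lambda: ([], []))
        for j in range(X.shape[1]):
            buckets[X[r, j].tobytes()][0].append(j)        # hash X-columns
            buckets[Z[r, j].tobytes()][1].append(j)        # hash Z-columns
        for js, ks in buckets.values():                    # equal keys => candidates
            for j in js:
                for k in ks:
                    hits[(j, k)] += 1
    return sorted(hits, key=hits.get, reverse=True)[:5]

rng = np.random.default_rng(6)
X = rng.choice([-1, 1], size=(200, 100))
y = X[:, 3] * X[:, 7]                  # planted interaction
print(xyz_candidates(X, y))            # (3, 7) should rank highly
```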

* Journal of Machine Learning Research, 19(37):1-42, 2018