Cian Eastwood

Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features

Jul 19, 2023
Cian Eastwood, Shashank Singh, Andrei Liviu Nicolicioiu, Marin Vlastelica, Julius von Kügelgen, Bernhard Schölkopf

To avoid failures on out-of-distribution data, recent works have sought to extract features that have a stable or invariant relationship with the label across domains, discarding the "spurious" or unstable features whose relationship with the label changes across domains. However, unstable features often carry complementary information about the label that could boost performance if used correctly in the test domain. Our main contribution is to show that it is possible to learn how to use these unstable features in the test domain without labels. In particular, we prove that pseudo-labels based on stable features provide sufficient guidance for doing so, provided that stable and unstable features are conditionally independent given the label. Based on this theoretical insight, we propose Stable Feature Boosting (SFB), an algorithm for: (i) learning a predictor that separates stable and conditionally-independent unstable features; and (ii) using the stable-feature predictions to adapt the unstable-feature predictions in the test domain. Theoretically, we prove that SFB can learn an asymptotically-optimal predictor without test-domain labels. Empirically, we demonstrate the effectiveness of SFB on real and synthetic data.
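
In code, the adaptation step might look like the following minimal sketch, assuming class probabilities from a frozen stable predictor and unstable features for the same unlabelled test inputs are already available (all names here are hypothetical, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt_unstable_head(stable_probs, unstable_feats):
    """Refit the unstable-feature head in the unlabelled test domain.

    stable_probs:   (n, k) class probabilities from the stable predictor.
    unstable_feats: (n, d) unstable features for the same test inputs.
    """
    # Pseudo-label each test point with the stable predictor's argmax class.
    pseudo_labels = stable_probs.argmax(axis=1)
    # Learn a fresh test-domain mapping from unstable features to labels,
    # guided only by the stable pseudo-labels (no true labels needed).
    return LogisticRegression(max_iter=1000).fit(unstable_feats, pseudo_labels)

def fuse_predictions(stable_probs, unstable_probs, eps=1e-12):
    # Under the conditional-independence assumption, the two predictors can be
    # combined naive-Bayes style by summing log-probabilities, e.g. with
    # unstable_probs = head.predict_proba(unstable_feats).
    return (np.log(stable_probs + eps) + np.log(unstable_probs + eps)).argmax(axis=1)
```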

DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability

Oct 01, 2022
Cian Eastwood, Andrei Liviu Nicolicioiu, Julius von Kügelgen, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf

In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation. Eastwood & Williams (2018) proposed three metrics for quantifying the quality of such disentangled representations: disentanglement (D), completeness (C) and informativeness (I). In this work, we first connect this DCI framework to two common notions of linear and nonlinear identifiability, thus establishing a formal link between disentanglement and the closely-related field of independent component analysis. We then propose an extended DCI-ES framework with two new measures of representation quality -- explicitness (E) and size (S) -- and point out how D and C can be computed for black-box predictors. Our main idea is that the functional capacity required to use a representation is an important but thus-far neglected aspect of representation quality, which we quantify using explicitness or ease-of-use (E). We illustrate the relevance of our extensions on the MPI3D and Cars3D datasets.
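
For concreteness, the original entropy-based D and C scores can be sketched from a codes-by-factors importance matrix as follows (an illustrative sketch of the Eastwood & Williams (2018) definitions, not the authors' code; E and S additionally require probing the representation with predictors of varying capacity):

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = p / (p.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def dci_scores(R):
    """R[i, j] >= 0: importance of code i for predicting factor j."""
    n_codes, n_factors = R.shape
    # Disentanglement: each code should matter for only one factor,
    # i.e. low entropy across each row of R.
    d = np.array([1 - entropy(R[i]) / np.log(n_factors) for i in range(n_codes)])
    # Completeness: each factor should be captured by only one code,
    # i.e. low entropy across each column of R.
    c = np.array([1 - entropy(R[:, j]) / np.log(n_codes) for j in range(n_factors)])
    # Weight each code by its share of the total importance.
    rho = R.sum(axis=1) / R.sum()
    return float(rho @ d), float(c.mean())
```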

Probable Domain Generalization via Quantile Risk Minimization

Jul 20, 2022
Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf

Domain generalization (DG) seeks predictors that perform well on unseen test distributions by leveraging labeled training data from multiple related distributions or domains. To achieve this, the standard formulation optimizes for worst-case performance over the set of all possible domains. However, since worst-case shifts are very unlikely in practice, this generally leads to overly conservative solutions. In fact, a recent study found that no DG algorithm outperformed empirical risk minimization in terms of average performance. In this work, we argue that DG is neither a worst-case problem nor an average-case problem, but rather a probabilistic one. To this end, we propose a probabilistic framework for DG, which we call Probable Domain Generalization, wherein our key idea is that distribution shifts seen during training should inform us of probable shifts at test time. To realize this, we explicitly relate training and test domains as draws from the same underlying meta-distribution, and propose a new optimization problem -- Quantile Risk Minimization (QRM) -- which requires that predictors generalize with high probability. We then prove that QRM: (i) produces predictors that generalize to new domains with a desired probability, given sufficiently many domains and samples; and (ii) recovers the causal predictor as the desired probability of generalization approaches one. In our experiments, we introduce a more holistic quantile-focused evaluation protocol for DG, and show that our algorithms outperform state-of-the-art baselines on real and synthetic data.
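
As a rough sketch, the empirical QRM objective replaces the average or worst-case domain risk with an alpha-quantile of per-domain risks; the paper's actual estimator is more careful (e.g. smoothing the quantile), and the placeholders below are hypothetical:

```python
import torch

def qrm_objective(model, domain_batches, loss_fn, alpha=0.9):
    """domain_batches: one (x, y) batch per training domain."""
    # One empirical risk per training domain.
    risks = torch.stack([loss_fn(model(x), y) for x, y in domain_batches])
    # Minimize the alpha-quantile of domain risks: alpha -> 1 approaches the
    # worst case, while alpha = 0.5 targets the median domain.
    return torch.quantile(risks, alpha)
```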

Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences

Mar 09, 2022
Cian Eastwood, Li Nanbo, Christopher K. I. Williams

Given two object images, how can we explain their differences in terms of the underlying object properties? To address this question, we propose Align-Deform-Subtract (ADS) -- an interventional framework for explaining object differences. By leveraging semantic alignments in image-space as counterfactual interventions on the underlying object properties, ADS iteratively quantifies and removes differences in object properties. The result is a set of "disentangled" error measures which explain object differences in terms of their underlying properties. Experiments on real and synthetic data illustrate the efficacy of the framework.
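
A minimal sketch of this recipe, with hypothetical registration routines (`affine_align`, `nonrigid_align`) standing in for real alignment methods:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def explain_differences(img_a, img_b, affine_align, nonrigid_align):
    """Attribute the image-space error between img_a and img_b to properties."""
    errors = {"total": mse(img_a, img_b)}
    # 1. Align: warp img_a onto img_b with an affine map; the error removed
    #    here is attributed to pose (position, scale, rotation).
    aligned = affine_align(img_a, img_b)
    errors["pose"] = errors["total"] - mse(aligned, img_b)
    # 2. Deform: remove remaining shape differences with a non-rigid warp.
    deformed = nonrigid_align(aligned, img_b)
    errors["shape"] = mse(aligned, img_b) - mse(deformed, img_b)
    # 3. Subtract: the residual is attributed to appearance (colour, texture).
    errors["appearance"] = mse(deformed, img_b)
    return errors
```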

Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views

Nov 13, 2021
Li Nanbo, Cian Eastwood, Robert B. Fisher

Learning object-centric representations of multi-object scenes is a promising approach towards machine intelligence, facilitating high-level reasoning and control from visual sensory data. However, current approaches for unsupervised object-centric scene representation are incapable of aggregating information from multiple observations of a scene. As a result, these "single-view" methods form their representations of a 3D scene based only on a single 2D observation (view). Naturally, this leads to several inaccuracies, with these methods falling victim to single-view spatial ambiguities. To address this, we propose the Multi-View and Multi-Object Network (MulMON) -- a method for learning accurate, object-centric representations of multi-object scenes by leveraging multiple views. In order to sidestep the main technical difficulty of the multi-object-multi-view scenario -- maintaining object correspondences across views -- MulMON iteratively updates the latent object representations for a scene over multiple views. To ensure that these iterative updates do indeed aggregate spatial information to form a complete 3D scene understanding, MulMON is asked to predict the appearance of the scene from novel viewpoints during training. Through experiments, we show that MulMON resolves spatial ambiguities better than single-view methods -- learning more accurate and disentangled object representations -- and also achieves new functionality in predicting object segmentations for novel viewpoints.

* Accepted at NeurIPS 2020 (Spotlight) 
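The iterative multi-view update at the heart of MulMON can be sketched as follows (the encoder and shapes are hypothetical placeholders, not the authors' code):

```python
def aggregate_views(encoder, views, viewpoints, z_init):
    """Refine one scene's object latents across its views.

    views, viewpoints: aligned lists of observations of the same scene.
    z_init:            (n_objects, latent_dim) initial object latents.
    """
    z = z_init
    for image, viewpoint in zip(views, viewpoints):
        # Update the *same* object slots with each new observation, rather than
        # re-inferring them from scratch, keeping correspondences across views.
        z = encoder(image, viewpoint, z)
    # At training time, z would additionally be decoded from a held-out novel
    # viewpoint, forcing the latents to capture genuine 3D scene structure.
    return z
```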

Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration

Jul 12, 2021
Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf

Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain, without access to the source-domain data during adaptation. Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain. We address these issues for a particularly pervasive type of domain shift called measurement shift, characterized by a change in measurement system (e.g. a change in sensor or lighting). In the source domain, we store a lightweight and flexible approximation of the feature distribution under the source data. In the target domain, we adapt the feature extractor such that the approximate feature distribution under the target data realigns with the approximation saved on the source. We call this method Feature Restoration (FR), as it seeks to extract features with the same semantics from the target domain as were previously extracted from the source. We additionally propose Bottom-Up Feature Restoration (BUFR), a bottom-up training scheme for FR which boosts performance by preserving learnt structure in the later layers of a network. Through experiments, we demonstrate that BUFR often outperforms existing SFDA methods in terms of accuracy, calibration, and data efficiency, while being less reliant on the performance of the source model in the target domain.
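
The core realignment idea can be sketched with simple per-dimension moment matching standing in for the paper's lightweight distribution approximation (names are hypothetical):

```python
import torch

def save_source_stats(source_feats):
    # Lightweight, per-dimension approximation of the source feature
    # distribution, stored before the source data is discarded.
    return source_feats.mean(0).detach(), source_feats.std(0).detach()

def restoration_loss(target_feats, src_mean, src_std):
    # Adapt the feature extractor so that target-domain features realign
    # with the statistics saved on the source.
    return (((target_feats.mean(0) - src_mean) ** 2).mean()
            + ((target_feats.std(0) - src_std) ** 2).mean())
```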
