Yoonho Lee

Confidence-Based Model Selection: When to Take Shortcuts for Subpopulation Shifts

Jun 19, 2023
Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn

Effective machine learning models learn both robust features that directly determine the outcome of interest (e.g., an object with wheels is more likely to be a car) and shortcut features (e.g., an object on a road is more likely to be a car). The latter can be a source of error under distributional shift, when the correlations change at test time. The prevailing sentiment in the robustness literature is to avoid such correlative shortcut features and learn robust predictors. However, while robust predictors perform better on worst-case distributional shifts, they often sacrifice accuracy on majority subpopulations. In this paper, we argue that shortcut features should not be entirely discarded. Instead, if we can identify the subpopulation to which an input belongs, we can adaptively choose among models with different strengths to achieve high performance on both majority and minority subpopulations. We propose COnfidence-baSed MOdel Selection (CosMoS), motivated by the observation that model confidence can effectively guide model selection. Notably, CosMoS does not require any target labels or group annotations, either of which may be difficult to obtain or unavailable. We evaluate CosMoS on four datasets with spurious correlations, each with multiple test sets with varying levels of data distribution shift. We find that CosMoS achieves 2-5% lower average regret across all subpopulations, compared to using only robust predictors or other model aggregation methods.

* 15 pages, 5 figures 
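
Below is a minimal sketch of the confidence-based selection idea described in the abstract: given several already-trained models (say, a shortcut-reliant ERM model and a robust, group-DRO-style model), each test input is routed to whichever model is most confident on it. The function and variable names are illustrative assumptions, not the exact CosMoS procedure.

```python
import torch
import torch.nn.functional as F

def select_by_confidence(models, x):
    """For each input in the batch, return the prediction of whichever model
    assigns the highest predictive confidence (max softmax probability).
    `models` is a list of callables mapping a batch of inputs to logits.
    """
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])  # (M, B, C)
    confidence = probs.max(dim=-1).values                           # (M, B)
    chosen = confidence.argmax(dim=0)                                # model index per input
    batch_idx = torch.arange(x.shape[0])
    preds = probs[chosen, batch_idx].argmax(dim=-1)                  # (B,)
    return preds, chosen
```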

Conservative Prediction via Data-Driven Confidence Minimization

Jun 08, 2023
Caroline Choi, Fahim Tajwar, Yoonho Lee, Huaxiu Yao, Ananya Kumar, Chelsea Finn

Errors of machine learning models are costly, especially in safety-critical domains such as healthcare, where such mistakes can prevent the deployment of machine learning altogether. In these settings, conservative models -- models which can defer to human judgment when they are likely to make an error -- may offer a solution. However, detecting unusual or difficult examples is notably challenging, as it is impossible to anticipate all potential inputs at test time. To address this issue, prior work has proposed to minimize the model's confidence on an auxiliary pseudo-OOD dataset. We theoretically analyze the effect of confidence minimization and show that the choice of auxiliary dataset is critical. Specifically, if the auxiliary dataset includes samples from the OOD region of interest, confidence minimization provably separates ID and OOD inputs by predictive confidence. Taking inspiration from this result, we present data-driven confidence minimization (DCM), which minimizes confidence on an uncertainty dataset containing examples that the model is likely to misclassify at test time. Our experiments show that DCM consistently outperforms state-of-the-art OOD detection methods on 8 ID-OOD dataset pairs, reducing FPR (at TPR 95%) by 6.3% and 58.1% on CIFAR-10 and CIFAR-100, and outperforms existing selective classification approaches on 4 datasets in conditions of distribution shift.

* Preprint. Under review 
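
As a rough illustration of the confidence-minimization objective described above (an assumed form, not necessarily the paper's exact loss), the training objective might combine cross-entropy on labeled in-distribution data with a term that pushes predictions on the uncertainty dataset toward the uniform distribution; `alpha` is an assumed weighting hyperparameter.

```python
import torch.nn.functional as F

def confidence_minimization_loss(model, x_id, y_id, x_uncertainty, alpha=1.0):
    """Cross-entropy on in-distribution data plus a confidence penalty on the
    uncertainty set: minimizing the mean negative log-probability over all
    classes equals (up to a constant factor) cross-entropy against the uniform
    distribution, i.e., it flattens the model's predictions there.
    """
    ce = F.cross_entropy(model(x_id), y_id)
    log_probs = F.log_softmax(model(x_uncertainty), dim=-1)
    conf_penalty = -log_probs.mean()
    return ce + alpha * conf_penalty
```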

Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features

Feb 10, 2023
Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn

Conventional approaches to robustness try to learn a model based on causal features. However, identifying maximally robust or causal features may be difficult in some scenarios, and in others, non-causal "shortcut" features may actually be more predictive. We propose a lightweight, sample-efficient approach that learns a diverse set of features and adapts to a target distribution by interpolating these features with a small target dataset. Our approach, Project and Probe (Pro$^2$), first learns a linear projection that maps a pre-trained embedding onto orthogonal directions while being predictive of labels in the source dataset. The goal of this step is to learn a variety of predictive features, so that at least some of them remain useful after distribution shift. Pro$^2$ then learns a linear classifier on top of these projected features using a small target dataset. We theoretically show that Pro$^2$ learns a projection matrix that is optimal for classification in an information-theoretic sense, resulting in better generalization due to a favorable bias-variance tradeoff. Our experiments on four datasets, with multiple distribution shift settings for each, show that Pro$^2$ improves performance by 5-15% when given limited target data compared to prior methods such as standard linear probing.

* 24 pages, 11 figures 
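
A rough sketch of the two-step recipe, under simplifying assumptions: `z_src`/`z_tgt` are numpy arrays of pre-trained embeddings, and the greedy Gram-Schmidt-style construction below is an illustrative stand-in for the learned orthogonal projection, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def project_and_probe(z_src, y_src, z_tgt, y_tgt, k=8):
    """Step 1 (project): extract k roughly orthogonal directions, each
    predictive of the source labels. Step 2 (probe): fit a linear classifier
    on the projected features using the small target dataset.
    """
    dirs, residual = [], z_src.copy()
    for _ in range(k):
        w = LogisticRegression(max_iter=1000).fit(residual, y_src).coef_.mean(0)
        w /= np.linalg.norm(w) + 1e-8
        dirs.append(w)
        residual = residual - np.outer(residual @ w, w)  # remove explained direction
    P = np.stack(dirs)                                   # (k, d) projection matrix
    probe = LogisticRegression(max_iter=1000).fit(z_tgt @ P.T, y_tgt)
    return P, probe
```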

DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature

Jan 26, 2023
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, Chelsea Finn

The fluency and factual knowledge of large language models (LLMs) heighten the need for corresponding systems to detect whether a piece of text is machine-written. For example, students may use LLMs to complete written assignments, leaving instructors unable to accurately assess student learning. In this paper, we first demonstrate that text sampled from an LLM tends to occupy negative curvature regions of the model's log probability function. Leveraging this observation, we then define a new curvature-based criterion for judging if a passage is generated from a given LLM. This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5). We find DetectGPT is more discriminative than existing zero-shot methods for model sample detection, notably improving detection of fake news articles generated by 20B parameter GPT-NeoX from 0.81 AUROC for the strongest zero-shot baseline to 0.95 AUROC for DetectGPT. See https://ericmitchell.ai/detectgpt for code, data, and other project information.

* Project website at https://ericmitchell.ai/detectgpt 
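
A sketch of the curvature criterion described above, with `log_prob` (log-probability of a passage under the model of interest) and `perturb` (a T5-style mask-and-refill rewrite) taken as hypothetical helpers rather than a real API; normalizing by the perturbation standard deviation is one variant, and thresholding the score decides machine vs. human.

```python
import numpy as np

def curvature_score(log_prob, perturb, passage, n_perturbations=20):
    """Compare the passage's log-probability under the model of interest with
    the average log-probability of perturbed rewrites. Machine-generated text
    tends to sit near a local maximum of log p, so the drop after perturbation
    is typically larger, yielding a higher score.
    """
    original = log_prob(passage)
    perturbed = np.array([log_prob(perturb(passage)) for _ in range(n_perturbations)])
    return (original - perturbed.mean()) / (perturbed.std() + 1e-8)
```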

Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time

Nov 25, 2022
Huaxiu Yao, Caroline Choi, Bochuan Cao, Yoonho Lee, Pang Wei Koh, Chelsea Finn

Distribution shift occurs when the test distribution differs from the training distribution, and it can considerably degrade performance of machine learning models deployed in the real world. Temporal shifts -- distribution shifts arising from the passage of time -- often occur gradually and have the additional structure of timestamp metadata. By leveraging timestamp metadata, models can potentially learn from trends in past distribution shifts and extrapolate into the future. While recent works have studied distribution shifts, temporal shifts remain underexplored. To address this gap, we curate Wild-Time, a benchmark of 5 datasets that reflect temporal distribution shifts arising in a variety of real-world applications, including patient prognosis and news classification. On these datasets, we systematically benchmark 13 prior approaches, including methods in domain generalization, continual learning, self-supervised learning, and ensemble learning. We use two evaluation strategies: evaluation with a fixed time split (Eval-Fix) and evaluation with a data stream (Eval-Stream). Eval-Fix, our primary evaluation strategy, aims to provide a simple evaluation protocol, while Eval-Stream is more realistic for certain real-world applications. Under both evaluation strategies, we observe an average performance drop of 20% from in-distribution to out-of-distribution data. Existing methods are unable to close this gap. Code is available at https://wild-time.github.io/.

* Accepted by NeurIPS 2022 Track on Datasets and Benchmarks 
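
For concreteness, a fixed-time-split evaluation of the kind Eval-Fix describes might look like the sketch below; the field names and interface are assumptions for illustration, not the benchmark's actual API.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class Example:
    x: object
    y: int
    timestamp: int  # e.g., year of collection

def eval_fix_split(data: Sequence[Example], split_time: int):
    """Everything observed up to `split_time` is in-distribution training data;
    everything after it forms the out-of-distribution test set, so the measured
    gap reflects temporal distribution shift."""
    train = [ex for ex in data if ex.timestamp <= split_time]
    test = [ex for ex in data if ex.timestamp > split_time]
    return train, test
```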

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

Oct 20, 2022
Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn

A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model, preserving learned features while also adapting to the new task. This paper shows that in such settings, selectively fine-tuning a subset of layers (which we term surgical fine-tuning) matches or outperforms commonly used fine-tuning approaches. Moreover, the type of distribution shift influences which subset is more effective to tune: for example, for image corruptions, fine-tuning only the first few layers works best. We validate our findings systematically across seven real-world data tasks spanning three types of distribution shifts. Theoretically, we prove that for two-layer neural networks in an idealized setting, first-layer tuning can outperform fine-tuning all layers. Intuitively, fine-tuning more parameters on a small target dataset can cause information learned during pre-training to be forgotten, and the relevant information depends on the type of shift.
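
A minimal sketch of the selective ("surgical") tuning recipe, assuming a torchvision-style model whose parameter names begin with block names such as `layer1`; which block to tune depends on the type of shift, and the block names here are illustrative.

```python
import torch.nn as nn

def surgical_parameters(model: nn.Module, tune_blocks=("layer1",)):
    """Freeze every parameter except those in the named blocks and return the
    trainable subset, which can then be passed to an optimizer, e.g.
    torch.optim.SGD(surgical_parameters(model), lr=1e-3).
    """
    trainable = []
    for name, p in model.named_parameters():
        if any(name.startswith(block) for block in tune_blocks):
            p.requires_grad = True
            trainable.append(p)
        else:
            p.requires_grad = False
    return trainable
```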

On Divergence Measures for Bayesian Pseudocoresets

Oct 12, 2022
Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, Juho Lee

A Bayesian pseudocoreset is a small synthetic dataset for which the posterior over parameters approximates that of the original dataset. While promising, the scalability of Bayesian pseudocoresets has not yet been validated in realistic problems such as image classification with deep neural networks. On the other hand, dataset distillation methods similarly construct a small dataset such that optimization using the synthetic dataset converges to a solution whose performance is competitive with optimization using the full data. Although dataset distillation has been empirically verified in large-scale settings, the framework is restricted to point estimates, and its adaptation to Bayesian inference has not been explored. This paper casts two representative dataset distillation algorithms as approximations to methods for constructing pseudocoresets by minimizing specific divergence measures: reverse KL divergence and Wasserstein distance. Furthermore, we provide a unifying view of such divergence measures in Bayesian pseudocoreset construction. Finally, we propose a novel Bayesian pseudocoreset algorithm based on minimizing forward KL divergence. Our empirical results demonstrate that the pseudocoresets constructed from these methods reflect the true posterior even in high-dimensional Bayesian inference problems.
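
In notation assumed here for illustration (not taken verbatim from the paper), with $x$ the full dataset, $u$ the synthetic pseudocoreset, and $\pi_x$, $\pi_u$ the corresponding posteriors over parameters $\theta$, the construction compares divergence choices of the form:

```latex
% Pseudocoreset construction as divergence minimization (illustrative notation).
u^\star \;=\; \arg\min_{u}\; D\big(\pi_u, \pi_x\big),
\qquad
\pi_u(\theta) \;\propto\; p(\theta)\, p(u \mid \theta).

% Reverse KL and the Wasserstein distance correspond to the two dataset
% distillation algorithms discussed above; the proposed alternative is the
% forward KL,
D_{\mathrm{FKL}}\big(\pi_x \,\|\, \pi_u\big)
  \;=\; \mathbb{E}_{\theta \sim \pi_x}\!\left[\log \frac{\pi_x(\theta)}{\pi_u(\theta)}\right].
```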

Diversify and Disambiguate: Learning From Underspecified Data

Feb 07, 2022
Yoonho Lee, Huaxiu Yao, Chelsea Finn

Many datasets are underspecified: several distinct solutions fit the data equally well. Underspecified datasets can be problematic for methods that learn a single hypothesis because different functions that achieve low training loss can focus on different predictive features and thus have widely varying predictions on out-of-distribution data. We propose DivDis, a simple two-stage framework that first learns a diverse collection of hypotheses for a task by leveraging unlabeled data from the test distribution. We then disambiguate by selecting one of the discovered hypotheses using minimal additional supervision, in the form of extra labels or inspection of function visualizations. We demonstrate the ability of DivDis to find hypotheses that use robust features in image classification and natural language processing problems with underspecification.

* Preprint 
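
A rough sketch of the first (diversify) stage under assumed names: every head must fit the labeled source data, while a disagreement term on unlabeled target data pushes the heads toward different predictive features. The pairwise agreement penalty below is an illustrative stand-in for the paper's diversity term, and `lam` is an assumed weight. The second (disambiguate) stage then simply keeps whichever head performs best under the small amount of extra supervision.

```python
import torch.nn.functional as F

def diversify_loss(backbone, heads, x_labeled, y, x_unlabeled, lam=1.0):
    """Source-fit term averaged over all heads, plus a penalty on how often
    pairs of heads agree on unlabeled target inputs; minimizing the penalty
    encourages the heads to differ where the data underdetermines them.
    """
    z_lab, z_unlab = backbone(x_labeled), backbone(x_unlabeled)
    fit = sum(F.cross_entropy(h(z_lab), y) for h in heads) / len(heads)

    probs = [F.softmax(h(z_unlab), dim=-1) for h in heads]
    agree = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            agree = agree + (probs[i] * probs[j]).sum(dim=-1).mean()
    return fit + lam * agree
```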

Diversity Matters When Learning From Ensembles

Oct 27, 2021
Giung Nam, Jongmin Yoon, Yoonho Lee, Juho Lee

Deep ensembles excel in large-scale image classification tasks both in terms of prediction accuracy and calibration. Despite being simple to train, the computation and memory costs of deep ensembles limit their practicality. While some recent works propose to distill an ensemble model into a single model to reduce such costs, there is still a performance gap between the ensemble and distilled models. We propose a simple approach for reducing this gap, i.e., bringing the distilled model's performance closer to that of the full ensemble. Our key assumption is that a distilled model should absorb as much function diversity inside the ensemble as possible. We first empirically show that the typical distillation procedure does not effectively transfer such diversity, especially for complex models that achieve near-zero training error. To fix this, we propose a perturbation strategy for distillation that reveals diversity by seeking inputs for which ensemble member outputs disagree. We empirically show that a model distilled with such perturbed samples indeed exhibits enhanced diversity, leading to improved performance.

* NeurIPS 2021 
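
The perturbation step could be sketched as below, assuming a simple gradient-ascent search on a disagreement measure (the variance of member predictions); this is an illustrative stand-in for the paper's actual perturbation strategy. The distillation loss (e.g., matching the ensemble's averaged predictions) is then computed on these perturbed samples rather than only on clean training inputs.

```python
import torch
import torch.nn.functional as F

def perturb_for_disagreement(ensemble, x, step=0.01, n_steps=1):
    """Nudge inputs toward regions where ensemble members disagree by taking
    gradient-ascent steps on the variance of their predicted probabilities."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        probs = torch.stack([F.softmax(m(x_adv), dim=-1) for m in ensemble])
        disagreement = probs.var(dim=0).sum()
        grad, = torch.autograd.grad(disagreement, x_adv)
        x_adv = (x_adv + step * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```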

On The Distribution of Penultimate Activations of Classification Networks

Jul 06, 2021
Minkyo Seo, Yoonho Lee, Suha Kwak

This paper studies probability distributions of penultimate activations of classification networks. We show that, when a classification network is trained with the cross-entropy loss, its final classification layer forms a generative-discriminative pair with a generative classifier based on a specific distribution of penultimate activations. More importantly, the distribution is parameterized by the weights of the final fully-connected layer and can be considered a generative model that synthesizes penultimate activations without requiring input data. We empirically demonstrate that this generative model enables stable knowledge distillation in the presence of domain shift, and can transfer knowledge from a classifier to variational autoencoders and generative adversarial networks for class-conditional image generation.

* 8 pages, UAI 2021, The first two authors equally contributed 
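
The generative-discriminative pairing can be written out generically as below; the specific distribution of penultimate activations $z$ used in the paper may differ, so this is only the standard Bayes-rule identity relating a softmax layer with weights $w_y$ and biases $b_y$ to a class-conditional model.

```latex
% Softmax layer as the discriminative half of a generative-discriminative pair
% (generic identity; the paper's specific activation distribution may differ).
p(y \mid z) \;=\; \frac{\exp\!\big(w_y^\top z + b_y\big)}{\sum_{y'} \exp\!\big(w_{y'}^\top z + b_{y'}\big)}
\quad\text{is recovered by Bayes' rule from}\quad
p(z \mid y) \;=\; \frac{q(z)\,\exp\!\big(w_y^\top z + b_y\big)}{Z_y},
\qquad p(y) \;\propto\; Z_y,
```

where $Z_y = \int q(z)\exp(w_y^\top z + b_y)\,dz$ is the per-class normalizer and $q$ is a base density over activations; sampling from $p(z \mid y)$ is what lets the final-layer weights act as a generative model of penultimate activations.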