Hunter Lang

Who Should Predict? Exact Algorithms For Learning to Defer to Humans

Jan 15, 2023
Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, David Sontag

Automated AI classifiers should be able to defer the prediction to a human decision maker to ensure more accurate predictions. In this work, we jointly train a classifier with a rejector, which decides on each data point whether the classifier or the human should predict. We show that prior approaches can fail to find a human-AI system with low misclassification error even when there exists a linear classifier and rejector that have zero error (the realizable setting). We prove that obtaining a linear pair with low error is NP-hard even when the problem is realizable. To complement this negative result, we give a mixed-integer-linear-programming (MILP) formulation that can optimally solve the problem in the linear setting. However, the MILP only scales to moderately-sized problems. Therefore, we provide a novel surrogate loss function that is realizable-consistent and performs well empirically. We test our approaches on a comprehensive set of datasets and compare to a wide range of baselines.
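
As a rough illustration of the joint classifier-rejector setup, here is a minimal sketch in the style of earlier cross-entropy surrogates for learning to defer; it is not the realizable-consistent surrogate or the MILP proposed in this paper, and the module and function names are made up. A single linear layer scores the K classes plus one extra "defer" output, and deferral is rewarded only on examples where the human is correct.

```python
import torch.nn as nn
import torch.nn.functional as F

# Sketch only: a linear classifier + rejector pair trained with a cross-entropy-style
# defer surrogate; NOT the paper's realizable-consistent loss or MILP formulation.
class LinearClassifierRejector(nn.Module):
    def __init__(self, d, num_classes):
        super().__init__()
        self.scores = nn.Linear(d, num_classes + 1)  # last output = "defer to the human"

    def forward(self, x):
        return self.scores(x)

def defer_surrogate(logits, y, human_correct):
    """logits: (n, K+1); y: (n,) true labels; human_correct: (n,) 0/1 tensor."""
    ce_model = F.cross_entropy(logits, y, reduction="none")  # reward the true class
    defer_logprob = F.log_softmax(logits, dim=1)[:, -1]      # log-probability of deferring
    # Reward deferral only on examples where the human is correct.
    return (ce_model - human_correct.float() * defer_logprob).mean()

# At test time: defer whenever the (K+1)-th score is the argmax; otherwise predict
# the argmax over the first K class scores.
```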

TabLLM: Few-shot Classification of Tabular Data with Large Language Models

Oct 19, 2022
Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, David Sontag

We study the application of large language models to zero-shot and few-shot classification of tabular data. We prompt the large language model with a serialization of the tabular data to a natural-language string, together with a short description of the classification problem. In the few-shot setting, we fine-tune the large language model using some labeled examples. We evaluate several serialization methods including templates, table-to-text models, and large language models. Despite its simplicity, we find that this technique outperforms prior deep-learning-based tabular classification methods on several benchmark datasets. In most cases, even zero-shot classification obtains non-trivial performance, illustrating the method's ability to exploit prior knowledge encoded in large language models. Unlike many deep learning methods for tabular datasets, this approach is also competitive with strong traditional baselines like gradient-boosted trees, especially in the very-few-shot setting.
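
A minimal sketch of the simplest kind of serialization described above, a text template over column names and values; the column names, task question, and helper names are hypothetical, and the paper also evaluates table-to-text and LLM-based serializers.

```python
# Illustrative "text template" serialization of one tabular row into a prompt.
def serialize_row(row: dict) -> str:
    """Turn one tabular row into a natural-language string."""
    return " ".join(f"The {col.replace('_', ' ')} is {val}." for col, val in row.items())

def build_prompt(row: dict, task_description: str) -> str:
    return serialize_row(row) + "\n" + task_description

row = {"age": 42, "occupation": "teacher", "hours_per_week": 35}
print(build_prompt(row, "Does this person earn more than 50000 dollars per year? Yes or no?"))
# The age is 42. The occupation is teacher. The hours per week is 35.
# Does this person earn more than 50000 dollars per year? Yes or no?
```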

Training Subset Selection for Weak Supervision

Jun 06, 2022
Hunter Lang, Aravindan Vijayaraghavan, David Sontag

Existing weak supervision approaches use all the data covered by weak signals to train a classifier. We show both theoretically and empirically that this is not always optimal. Intuitively, there is a tradeoff between the amount of weakly-labeled data and the precision of the weak labels. We explore this tradeoff by combining pretrained data representations with the cut statistic (Muhlenbach et al., 2004) to select (hopefully) high-quality subsets of the weakly-labeled training data. Subset selection applies to any label model and classifier and is very simple to plug into existing weak supervision pipelines, requiring just a few lines of code. We show our subset selection method improves the performance of weak supervision for a wide range of label models, classifiers, and datasets. Using less weakly-labeled data improves the accuracy of weak supervision pipelines by up to 19% (absolute) on benchmark tasks.
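
To make the idea concrete, here is a simplified neighborhood-agreement selector, a rough stand-in for the standardized cut statistic rather than the exact score used in the paper; the function and parameter names are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Simplified sketch: rank weakly-labeled examples by how often their nearest
# neighbors in a pretrained embedding space share the same weak label, and keep
# the most consistent fraction. A rough stand-in for the cut statistic.
def select_subset(embeddings, weak_labels, k=10, keep_frac=0.5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)              # idx[:, 0] is the point itself
    neighbor_labels = weak_labels[idx[:, 1:]]       # (n, k) weak labels of the neighbors
    agreement = (neighbor_labels == weak_labels[:, None]).mean(axis=1)
    n_keep = int(keep_frac * len(weak_labels))
    return np.argsort(-agreement)[:n_keep]          # indices of the most consistent examples

# Usage: train the end classifier only on the selected indices.
```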

Large Language Models are Zero-Shot Clinical Information Extractors

May 25, 2022
Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, David Sontag

We show that large language models, such as GPT-3, perform well at zero-shot information extraction from clinical text despite not being trained specifically for the clinical domain. We present several examples showing how to use these models as tools for the diverse tasks of (i) concept disambiguation, (ii) evidence extraction, (iii) coreference resolution, and (iv) concept extraction, all on clinical text. The key to good performance is the use of simple task-specific programs that map from the language model outputs to the label space of the task. We refer to these programs as resolvers, a generalization of the verbalizer, which defines a mapping between output tokens and a discrete label space. We show in our examples that good resolvers share common components (e.g., "safety checks" that ensure the language model outputs faithfully match the input data), and that the common patterns across tasks make resolvers lightweight and easy to create. To better evaluate these systems, we also introduce two new datasets for benchmarking zero-shot clinical information extraction based on manual relabeling of the CASI dataset (Moon et al., 2014) with labels for new tasks. On the clinical extraction tasks we studied, the GPT-3 + resolver systems significantly outperform existing zero- and few-shot baselines.
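
As an illustration of what a resolver might look like for a span-extraction task, here is a minimal sketch; the prompt/output format and the example note are invented, and the "safety check" keeps only spans that actually appear in the input text.

```python
import re

# Minimal sketch of a resolver: map a raw language-model completion to a list of
# spans, keeping only spans grounded in the input note (the safety check).
def resolve_spans(note: str, completion: str) -> list[str]:
    spans = []
    for line in completion.splitlines():
        span = re.sub(r"^\s*[-*\d.]+\s*", "", line).strip().strip('"')  # drop list bullets
        if not span or span.lower() in {"none", "n/a"}:
            continue
        if span.lower() in note.lower():       # safety check: span must appear in the note
            spans.append(span)
    return spans

note = "Patient reports chest pain and shortness of breath since Tuesday."
completion = "- chest pain\n- shortness of breath\n- fever"
print(resolve_spans(note, completion))         # ['chest pain', 'shortness of breath']
```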

Co-training Improves Prompt-based Learning for Large Language Models

Feb 02, 2022
Hunter Lang, Monica Agrawal, Yoon Kim, David Sontag

We demonstrate that co-training (Blum & Mitchell, 1998) can improve the performance of prompt-based learning by using unlabeled data. While prompting has emerged as a promising paradigm for few-shot and zero-shot learning, it is often brittle and requires much larger models compared to the standard supervised setup. We find that co-training makes it possible to improve the original prompt model and at the same time learn a smaller, downstream task-specific model. In the case where we only have partial access to a prompt model (e.g., output probabilities from GPT-3 (Brown et al., 2020)), we learn a calibration model over the prompt outputs. When we have full access to the prompt model's gradients but full finetuning remains prohibitively expensive (e.g., T0 (Sanh et al., 2021)), we learn a set of continuous soft-prompt vectors to iteratively update the prompt model. We find that models trained in this manner can significantly improve performance on challenging datasets where there is currently a large gap between prompt-based learning and fully-supervised models.
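
A schematic version of the two-view loop, as a simplified sketch rather than the paper's exact algorithm: view A is the prompt model's output probabilities (e.g., obtained from an API), view B is a feature representation of the raw input for a smaller task-specific model, and each view pseudo-labels its most confident examples for the other. All names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Schematic two-view co-training loop (simplified sketch, not the exact method).
def cotrain(probs_a, feats_b, seed_idx, seed_labels, n_rounds=4, grow=200):
    model_a = LogisticRegression(max_iter=1000)   # calibration model over prompt outputs
    model_b = LogisticRegression(max_iter=1000)   # small downstream task-specific model
    idx, y = np.asarray(seed_idx), np.asarray(seed_labels)
    for _ in range(n_rounds):
        model_a.fit(probs_a[idx], y)                           # view A trains on current set
        conf = model_a.predict_proba(probs_a).max(axis=1)
        idx = np.argsort(-conf)[: len(idx) + grow]             # grow by most confident points
        y = model_a.predict(probs_a[idx])                      # ...pseudo-labeled by view A
        model_b.fit(feats_b[idx], y)                           # view B trains on them
        conf = model_b.predict_proba(feats_b).max(axis=1)
        idx = np.argsort(-conf)[: len(idx) + grow]
        y = model_b.predict(feats_b[idx])                      # ...and labels for view A
    return model_a, model_b
```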

* 17 pages, 8 figures 

Leveraging Time Irreversibility with Order-Contrastive Pre-training

Nov 04, 2021
Monica Agrawal, Hunter Lang, Michael Offin, Lior Gazit, David Sontag

Label-scarce, high-dimensional domains such as healthcare present a challenge for modern machine learning techniques. To overcome the difficulties posed by a lack of labeled data, we explore an "order-contrastive" method for self-supervised pre-training on longitudinal data. We sample pairs of time segments, switch the order for half of them, and train a model to predict whether a given pair is in the correct order. Intuitively, the ordering task allows the model to attend to the least time-reversible features (for example, features that indicate progression of a chronic disease). The same features are often useful for downstream tasks of interest. To quantify this, we study a simple theoretical setting where we prove a finite-sample guarantee for the downstream error of a representation learned with order-contrastive pre-training. Empirically, in synthetic and longitudinal healthcare settings, we demonstrate the effectiveness of order-contrastive pre-training in the small-data regime over supervised learning and other self-supervised pre-training baselines. Our results indicate that pre-training methods designed for particular classes of distributions and downstream tasks can improve the performance of self-supervised learning.
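
A minimal sketch of how the pre-training pairs might be constructed; the data layout (each record as a time-ordered list of segments) and the helper name are assumptions made for illustration.

```python
import random

# Minimal sketch: sample pairs of consecutive segments, switch the order for half
# of them, and label each pair by whether it is in its original order.
def make_order_pairs(records, pairs_per_record=4, seed=0):
    rng = random.Random(seed)
    pairs = []
    for segments in records:                      # segments: time-ordered list per record
        for _ in range(pairs_per_record):
            i = rng.randrange(len(segments) - 1)
            first, second = segments[i], segments[i + 1]
            if rng.random() < 0.5:
                pairs.append(((first, second), 1))   # kept in the original order
            else:
                pairs.append(((second, first), 0))   # order switched
    return pairs

# A model is then pre-trained to predict the 0/1 "correct order" label for each pair.
```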

Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning

Jul 27, 2021
Hoifung Poon, Hai Wang, Hunter Lang

Deep learning has proven effective for various application tasks, but its applicability is limited by the reliance on annotated examples. Self-supervised learning has emerged as a promising direction to alleviate the supervision bottleneck, but existing work focuses on leveraging co-occurrences in unlabeled data for task-agnostic representation learning, as exemplified by masked language model pretraining. In this chapter, we explore task-specific self-supervision, which leverages domain knowledge to automatically annotate noisy training examples for end applications, either by introducing labeling functions for annotating individual instances, or by imposing constraints over interdependent label decisions. We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning. DPL represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end using variational EM. Next, we present self-supervised self-supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial seed self-supervision, S4 iteratively uses the deep neural network to propose new self-supervision. These are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments on real-world applications such as biomedical machine reading and various text classification tasks show that task-specific self-supervision can effectively leverage domain expertise and often match the accuracy of supervised methods with a tiny fraction of human effort.
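
As a loose illustration of the E-step in this kind of framework, here is a drastically simplified sketch that treats each labeling function as an independent noisy vote with a fixed assumed accuracy, rather than the full probabilistic-logic factor graph; all names and the fixed accuracy are assumptions.

```python
import numpy as np

# Drastically simplified DPL-style E-step: combine the network's current predictions
# with labeling-function votes (treated as independent noisy factors) into a
# posterior over the latent label of each example.
def e_step(nn_probs, lf_votes, accuracy=0.8):
    """nn_probs: (n, K) current network predictions; lf_votes: (n, m), -1 = abstain."""
    K = nn_probs.shape[1]
    log_post = np.log(nn_probs + 1e-12)
    for j in range(lf_votes.shape[1]):
        vote = lf_votes[:, j]
        for k in range(K):
            factor = np.where(vote == -1, 1.0,
                              np.where(vote == k, accuracy, (1 - accuracy) / max(K - 1, 1)))
            log_post[:, k] += np.log(factor)
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    return post / post.sum(axis=1, keepdims=True)

# M-step (not shown): train the neural network on these soft labels, then repeat.
```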

* Book chapter. arXiv admin note: substantial text overlap with arXiv:2012.12474, arXiv:1808.08485, arXiv:2008.12878 

Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

Feb 26, 2021
Hunter Lang, Aravind Reddy, David Sontag, Aravindan Vijayaraghavan

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This "noisy stable" model realistically fits with practical MAP inference problems: we design an algorithm for finding "close" stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.
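
For reference, the Potts MAP objective and the standard local (pairwise) LP relaxation studied in this line of work can be written as follows; the notation here is generic and may differ from the paper's.

```latex
% Potts MAP objective (generic notation):
\min_{x \in [k]^V} \;\; \sum_{u \in V} \theta_u(x_u) \;+\; \sum_{(u,v) \in E} w_{uv}\, \mathbf{1}[x_u \neq x_v]

% Standard local (pairwise) LP relaxation; "LP recovery" means its optimal
% solution is integral and coincides with the MAP assignment:
\begin{aligned}
\min_{\mu \ge 0} \quad & \sum_{u \in V} \sum_{i} \theta_u(i)\, \mu_u(i)
  \;+\; \sum_{(u,v) \in E} w_{uv} \sum_{i \neq j} \mu_{uv}(i,j) \\
\text{s.t.} \quad & \sum_{i} \mu_u(i) = 1 \ \ \forall u \in V, \qquad
  \sum_{j} \mu_{uv}(i,j) = \mu_u(i) \ \ \forall (u,v) \in E,\ \forall i .
\end{aligned}
```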

* 25 pages, 2 figures, 2 tables. To appear in AISTATS 2021 

Self-supervised self-supervision by combining deep learning and probabilistic logic

Dec 23, 2020
Hunter Lang, Hoifung Poon

Labeling training examples at scale is a perennial challenge in machine learning. Self-supervision methods compensate for the lack of direct supervision by leveraging prior knowledge to automatically generate noisy labeled examples. Deep probabilistic logic (DPL) is a unifying framework for self-supervised learning that represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end using variational EM. While DPL is successful at combining pre-specified self-supervision, manually crafting self-supervision to attain high accuracy may still be tedious and challenging. In this paper, we propose Self-Supervised Self-Supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial "seed," S4 iteratively uses the deep neural network to propose new self-supervision. These are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments show that S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
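
A schematic sketch of the proposal step for feature-based labeling functions; the rule format, thresholds, and names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

# Schematic S4-style proposal step: find features that strongly predict a single
# class among the examples the current network labels confidently, and propose
# them as new labeling functions.
def propose_labeling_functions(features, nn_probs, confidence=0.9, precision=0.95, min_support=20):
    """features: (n, d) binary matrix; nn_probs: (n, K) current network predictions."""
    confident = nn_probs.max(axis=1) >= confidence
    pseudo = nn_probs.argmax(axis=1)
    proposals = []
    for f in range(features.shape[1]):
        covered = confident & (features[:, f] == 1)
        if covered.sum() < min_support:
            continue
        counts = np.bincount(pseudo[covered], minlength=nn_probs.shape[1])
        best = int(counts.argmax())
        if counts[best] / covered.sum() >= precision:
            proposals.append((f, best))            # candidate rule: "feature f => class best"
    return proposals

# Each proposal is either added directly (structured self-training) or shown to a
# human expert for verification (feature-based active learning).
```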

* 12 pages, 2 figures 

Graph cuts always find a global optimum (with a catch)

Nov 07, 2020
Hunter Lang, David Sontag, Aravindan Vijayaraghavan

We prove that the alpha-expansion algorithm for MAP inference always returns a globally optimal assignment for Markov Random Fields with Potts pairwise potentials, with a catch: the returned assignment is only guaranteed to be optimal in a small perturbation of the original problem instance. In other words, all local minima with respect to expansion moves are global minima to slightly perturbed versions of the problem. On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance. We design an algorithm that can certify whether this is the case in practice. On several MAP inference problem instances from computer vision, this algorithm certifies that MAP solutions to all of these perturbations are very close to solutions of the original instance. These results taken together give a cohesive explanation for the good performance of "graph cuts" algorithms in practice. Every local expansion minimum is a global minimum in a small perturbation of the problem, and all of these global minima are close to the original solution.
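
One common way to make "a small perturbation of the original problem instance" precise is via multiplicative perturbations of the pairwise weights; this is a sketch of the notation, not necessarily the paper's exact definition.

```latex
% A gamma-perturbation of a Potts instance rescales each pairwise weight
% independently by a bounded factor:
E'(x) \;=\; \sum_{u \in V} \theta_u(x_u) \;+\; \sum_{(u,v) \in E} c_{uv}\, w_{uv}\, \mathbf{1}[x_u \neq x_v],
\qquad c_{uv} \in [1, \gamma] \ \text{ for every edge } (u,v) \in E .
```

In this notation, the claim above is that every assignment that is locally optimal with respect to expansion moves is the exact optimum of some perturbed energy E' of this form.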

* 16 pages, 2 figures 