Jeffrey Heer

ScatterShot: Interactive In-context Example Curation for Text Transformation

Feb 14, 2023
Tongshuang Wu, Hua Shen, Daniel S. Weld, Jeffrey Heer, Marco Tulio Ribeiro

The in-context learning capabilities of LLMs like GPT-3 allow annotators to customize an LLM to their specific tasks with a small number of examples. However, users tend to include only the most obvious patterns when crafting examples, resulting in underspecified in-context functions that fall short on unseen cases. Further, it is hard to know when "enough" examples have been included, even for known patterns. In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from underexplored or not-yet-saturated slices in an active-learning manner, and uses an LLM together with the current example set to help users label more efficiently. In simulation studies on two text perturbation scenarios, ScatterShot sampling improves the resulting few-shot functions by 4-5 percentage points over random sampling, with less variance as more examples are added. In a user study, ScatterShot helps users cover more patterns in the input space and label in-context examples more efficiently, resulting in better in-context learning with less user effort.

* IUI 2023: 28th International Conference on Intelligent User Interfaces 
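
As a rough illustration of the slice-then-sample loop the abstract describes, here is a minimal sketch, not ScatterShot's implementation: TF-IDF plus k-means stands in for task-specific slicing, a least-covered-slice heuristic stands in for the active-learning sampler, and `propose_label` is a stub for the few-shot LLM call that the user would verify or edit.

```python
# Hedged sketch of a ScatterShot-style curation loop (not the authors' code).
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def propose_label(example_set, text):
    """Stub for a few-shot LLM call that drafts a label for `text` using the
    current in-context examples; in the real system the user verifies or edits it."""
    return f"<LLM draft for: {text!r}>"


def curate(unlabeled, n_slices=3, budget=4):
    # "Slice" the unlabeled pool into rough task-specific patterns.
    vecs = TfidfVectorizer().fit_transform(unlabeled)
    slice_of = KMeans(n_clusters=n_slices, n_init=10).fit_predict(vecs)
    coverage = Counter({s: 0 for s in range(n_slices)})
    example_set, seen = [], set()
    while len(example_set) < budget:
        live = {s: c for s, c in coverage.items() if c != float("inf")}
        if not live:
            break  # every slice exhausted
        # Active-learning heuristic: sample from the least-covered slice.
        target = min(live, key=live.get)
        candidates = [i for i, s in enumerate(slice_of)
                      if s == target and i not in seen]
        if not candidates:
            coverage[target] = float("inf")  # mark slice exhausted
            continue
        idx = candidates[0]
        seen.add(idx)
        draft = propose_label(example_set, unlabeled[idx])
        example_set.append((unlabeled[idx], draft))  # user-verified in reality
        coverage[target] += 1
    return example_set


examples = curate(["lower the price", "PRICE: reduce it", "ship faster",
                   "shipping is slow", "refund me", "what is the return policy?"])
```

The point of the coverage counter is the abstract's "not-yet-saturated slices": sampling rotates across patterns instead of piling more examples onto the one the user started with.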

Tisane: Authoring Statistical Models via Formal Reasoning from Conceptual and Data Relationships

Jan 07, 2022
Eunice Jun, Audrey Seo, Jeffrey Heer, René Just

Proper statistical modeling incorporates domain theory about how concepts relate and details of how data were measured. However, data analysts currently lack tool support for recording and reasoning about domain assumptions, data collection, and modeling choices in an integrated manner, leading to mistakes that can compromise scientific validity. For instance, generalized linear mixed-effects models (GLMMs) help answer complex research questions, but omitting random effects impairs the generalizability of results. To address this need, we present Tisane, a mixed-initiative system for authoring generalized linear models with and without mixed effects. Tisane introduces a study design specification language for expressing and asking questions about relationships between variables. Tisane contributes an interactive compilation process that represents relationships in a graph, infers candidate statistical models, and asks follow-up questions that disambiguate user queries in order to construct a valid model. In case studies with three researchers, we find that Tisane helps them focus on their goals and assumptions while avoiding past mistakes.
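
The abstract's cautionary example, omitting random effects, is easy to make concrete. The snippet below does not use Tisane's API; it uses statsmodels on synthetic repeated-measures data to show the kind of random-intercept GLMM a Tisane query over these relationships would compile to, and why treating clustered observations as independent is a mistake.

```python
# Illustration of the random-effects point with statsmodels (not Tisane's API).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 20, 5
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "condition": rng.integers(0, 2, n_participants * n_trials),
})
# Each participant has their own baseline, so observations are clustered
# within participant rather than independent.
baseline = rng.normal(0, 1, n_participants)[df["participant"]]
df["score"] = baseline + 0.5 * df["condition"] + rng.normal(0, 0.3, len(df))

# Fixed effect of condition plus a random intercept per participant:
# the model an analyst should arrive at for this study design.
glmm = smf.mixedlm("score ~ condition", df, groups=df["participant"]).fit()
print(glmm.summary())
```

A plain `smf.ols("score ~ condition", df)` on the same data ignores the per-participant clustering, which is exactly the generalizability hazard the abstract flags.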


Polyjuice: Automated, General-purpose Counterfactual Generation

Jan 01, 2021
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S. Weld

Counterfactual examples have been shown to be useful for many applications, including calibrating, evaluating, and explaining model decision boundaries. However, previous methods for generating such counterfactual examples have been tightly tailored to specific applications, rely on a limited range of linguistic patterns, or are hard to scale. We propose to disentangle counterfactual generation from its use cases, i.e., to gather general-purpose counterfactuals first and then select them for specific applications. We frame automated counterfactual generation as text generation and finetune GPT-2 into a generator, Polyjuice, which produces fluent and diverse counterfactuals. Our method also allows control over where perturbations happen and what they do. We show that Polyjuice supports multiple use cases: by generating diverse counterfactuals for humans to label, Polyjuice helps produce high-quality datasets for model training and evaluation, requiring 40% less human effort. When used to generate explanations, Polyjuice augments feature attribution methods to reveal models' erroneous behaviors.
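
To make the "control where perturbations happen and what they do" idea concrete, here is a hedged sketch using Hugging Face's GPT-2. The checkpoint name, the `<|perturb|>` separator, the `[negation]` control code, and the `[BLANK]` placeholder are illustrative assumptions based on the abstract's description, not the released Polyjuice interface; the real generator is a GPT-2 finetuned on (original, control code, perturbed) triples.

```python
# Sketch of Polyjuice-style controlled counterfactual generation (assumed format).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The control code ([negation]) steers *what* the perturbation does; the
# [BLANK] marks *where* it happens, per the abstract's description.
prompt = "It is great for kids. <|perturb|> [negation] It is [BLANK] for kids."
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, do_sample=True, top_p=0.9, max_new_tokens=20,
                     num_return_sequences=3, pad_token_id=tok.eos_token_id)
for seq in out:
    # Decode only the newly generated continuation, not the prompt.
    print(tok.decode(seq[ids.shape[1]:], skip_special_tokens=True))
```

With a base (unfinetuned) GPT-2 the continuations will not respect the control code; the finetuning step is what teaches the model to fill the blank consistently with the requested perturbation type.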


CORAL: COde RepresentAtion Learning with Weakly-Supervised Transformers for Analyzing Data Analysis

Aug 28, 2020
Ge Zhang, Mike A. Merrill, Yang Liu, Jeffrey Heer, Tim Althoff

Large-scale analysis of source code, and in particular scientific source code, holds the promise of better understanding the data science process, identifying analytical best practices, and providing insights to the builders of scientific toolkits. However, large corpora have remained unanalyzed in depth, as descriptive labels are absent and require expert domain knowledge to generate. We propose a novel weakly supervised transformer-based architecture for computing joint representations of code from both abstract syntax trees and surrounding natural language comments. We then evaluate the model on a new classification task: labeling computational notebook cells as stages in the data analysis process, from data import to wrangling, exploration, modeling, and evaluation. We show that our model, leveraging only easily available weak supervision, achieves a 38% increase in accuracy over expert-supplied heuristics and outperforms a suite of baselines. Our model enables us to examine a set of 118,000 Jupyter Notebooks to uncover common data analysis patterns. Focusing on notebooks with relationships to academic articles, we conduct the largest study of scientific code to date and find that notebook composition correlates with the citation count of corresponding papers.
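
One common form of the "easily available weak supervision" the abstract mentions is noisy keyword heuristics over a cell's AST; the sketch below illustrates that idea, and the stage keyword lists are my own assumptions, not necessarily the signal or heuristics CORAL uses.

```python
# Sketch of weak supervision for notebook-cell stage labels (assumed keywords).
import ast

STAGE_KEYWORDS = {
    "import":   {"read_csv", "read_json", "load", "open"},
    "wrangle":  {"dropna", "fillna", "merge", "groupby", "rename"},
    "explore":  {"plot", "hist", "describe", "scatter", "head"},
    "model":    {"fit", "train", "LogisticRegression", "compile"},
    "evaluate": {"score", "accuracy_score", "predict", "cross_val_score"},
}


def weak_label(cell_source: str):
    """Return a noisy stage label for a code cell, or None if no heuristic fires."""
    try:
        tree = ast.parse(cell_source)
    except SyntaxError:
        return None
    # Collect attribute and variable names from the cell's AST.
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute):
            names.add(node.attr)
        elif isinstance(node, ast.Name):
            names.add(node.id)
    # First matching stage wins, so labels are deliberately noisy.
    for stage, keywords in STAGE_KEYWORDS.items():
        if names & keywords:
            return stage
    return None


print(weak_label("df = pd.read_csv('data.csv')"))  # -> 'import'
print(weak_label("model.fit(X_train, y_train)"))   # -> 'model'
```

Labels like these are cheap but imprecise, which is why the paper trains a transformer over AST and comment representations on top of the weak signal rather than using the heuristics directly.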
