
Carey E. Priebe

A Statistical Turing Test for Generative Models

Sep 16, 2023
Hayden Helm, Carey E. Priebe, Weiwei Yang

The emergence of human-like abilities of AI systems for content generation in domains such as text, audio, and vision has prompted the development of classifiers to determine whether content originated from a human or a machine. Implicit in these efforts is the assumption that the generation properties of humans differ from those of machines. In this work, we provide a framework, in the language of statistical pattern recognition, that quantifies the difference between the distributions of human- and machine-generated content conditioned on an evaluation context. We describe current methods in the context of the framework and demonstrate how to use it to evaluate the progression of generative models toward human-like capabilities along many axes of analysis.
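
The distributional comparison at the heart of the abstract can be illustrated with a standard two-sample permutation test. This is a minimal sketch, not the paper's framework: the "human" and "machine" scores below are synthetic surrogates for a hypothetical one-dimensional feature extracted from generated content.

```python
import numpy as np

def permutation_two_sample(x, y, n_perm=2000, seed=0):
    """Permutation test for a difference in means between two samples.

    A genuine gap between the two generating distributions yields a small
    p-value; indistinguishable sources yield a roughly uniform p-value.
    """
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel items at random under the null
        stat = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)

# Synthetic surrogate scores; the shift makes the sources distinguishable.
rng = np.random.default_rng(1)
human = rng.normal(0.0, 1.0, 200)
machine = rng.normal(0.8, 1.0, 200)
p_value = permutation_two_sample(human, machine)
```

As the machine distribution approaches the human one, the test loses power and the p-value drifts toward uniform, which is one simple sense in which a model becomes "human-like" under an evaluation context.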


Gotta match 'em all: Solution diversification in graph matching matched filters

Sep 11, 2023
Zhirui Li, Ben Johnson, Daniel L. Sussman, Carey E. Priebe, Vince Lyzinski

We present a novel approach for finding multiple noisily embedded template graphs in a very large background graph. Our method builds upon the graph-matching-matched-filter technique proposed in Sussman et al., with the discovery of multiple diverse matchings being achieved by iteratively penalizing a suitable node-pair similarity matrix in the matched filter algorithm. In addition, we propose algorithmic speed-ups that greatly enhance the scalability of our matched-filter approach. We present theoretical justification of our methodology in the setting of correlated Erdős-Rényi graphs, showing its ability to sequentially discover multiple templates under mild model conditions. We additionally demonstrate our method's utility via extensive experiments on both simulated models and real-world datasets, including human brain connectomes and a large transactional knowledge base.
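
The penalize-and-resolve idea can be sketched with an off-the-shelf assignment solver. This is a toy illustration, not the paper's matched-filter algorithm: `sim` stands in for a hypothetical template-to-background node-pair similarity matrix, and the assignment step replaces the graph-matching solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def diverse_matchings(sim, k=2, penalty=1e9):
    """Return k node matchings, penalizing previously used node pairs.

    Each round solves a (rectangular) assignment problem on the similarity
    matrix, then subtracts a large penalty from the matched entries so the
    next round is pushed toward a different solution.
    """
    sim = sim.astype(float).copy()
    matchings = []
    for _ in range(k):
        rows, cols = linear_sum_assignment(sim, maximize=True)
        matchings.append(list(zip(rows, cols)))
        for r, c in zip(rows, cols):
            sim[r, c] -= penalty
    return matchings
```

With a 2-node template and a background containing two planted copies (strong similarity at columns 0-1, slightly weaker at columns 2-3), the first matching finds the stronger copy and the second, after penalization, finds the other.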

* 36 pages, 12 figures, 1 table 

Comparing Foundation Models using Data Kernels

May 18, 2023
Brandon Duderstadt, Hayden S. Helm, Carey E. Priebe

Recent advances in self-supervised learning and neural network scaling have enabled the creation of large models, known as foundation models, which can be easily adapted to a wide range of downstream tasks. The current paradigm for comparing foundation models involves evaluating them with aggregate metrics on various benchmark datasets. This method of model comparison is heavily dependent on the chosen evaluation metric, which makes it unsuitable for situations where the ideal metric is either not obvious or unavailable. In this work, we present a methodology for directly comparing the embedding space geometry of foundation models, which facilitates model comparison without the need for an explicit evaluation metric. Our methodology is grounded in random graph theory and enables valid hypothesis testing of embedding similarity on a per-datum basis. Further, we demonstrate how our methodology can be extended to facilitate population level model comparison. In particular, we show how our framework can induce a manifold of models equipped with a distance function that correlates strongly with several downstream metrics. We remark on the utility of this population level model comparison as a first step towards a taxonomic science of foundation models.
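
One cheap proxy for "per-datum embedding-space geometry" is the overlap of each datum's nearest-neighbor set across two models. This sketch is an assumption-laden stand-in for the paper's random-graph-based hypothesis tests, not a reimplementation of them:

```python
import numpy as np

def neighbor_agreement(emb_a, emb_b, k=3):
    """Per-datum fraction of shared k-nearest neighbors across two embeddings.

    Both arrays hold one embedding row per datum for the same data; no task
    metric is needed, only the local geometry of each embedding space.
    """
    def knn(emb):
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
        return np.argsort(d, axis=1)[:, :k]
    na, nb = knn(emb_a), knn(emb_b)
    return np.array([len(set(na[i]) & set(nb[i])) / k for i in range(len(emb_a))])
```

Two embeddings that differ only by a rotation agree perfectly datum-by-datum, while unrelated embeddings do not, which is the kind of metric-free, per-datum comparison the abstract describes.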


Semisupervised regression in latent structure networks on unknown manifolds

May 04, 2023
Aranyak Acharyya, Joshua Agterberg, Michael W. Trosset, Youngser Park, Carey E. Priebe

Random graphs are increasingly becoming objects of interest for modeling networks in a wide range of applications. Latent position random graph models posit that each node is associated with a latent position vector, and that these vectors follow some geometric structure in the latent space. In this paper, we consider random dot product graphs, in which an edge is formed between two nodes with probability given by the inner product of their respective latent positions. We assume that the latent position vectors lie on an unknown one-dimensional curve and are coupled with a response covariate via a regression model. Using the geometry of the underlying latent position vectors, we propose a manifold learning and graph embedding technique to predict the response variable on out-of-sample nodes, and we establish convergence guarantees for these responses. Our theoretical results are supported by simulations and an application to Drosophila brain data.
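
The generative model in the abstract is concrete enough to sample directly: latent positions on a one-dimensional curve, edge probabilities given by inner products, and a response coupled to the latent parameter. The specific curve, noise level, and regression below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
t = rng.uniform(0.1, 0.7, n)            # scalar latent parameter per node
X = np.column_stack([t, t ** 2])        # latent positions on a 1-D curve in R^2
P = X @ X.T                             # edge probability = inner product (< 1 here)
A = rng.uniform(size=(n, n)) < P        # Bernoulli edges
A = np.triu(A, 1).astype(int)
A = A + A.T                             # symmetric, hollow adjacency matrix
y = 2.0 * t + rng.normal(0.0, 0.05, n)  # response coupled to the latent parameter
```

Given `A` and responses on a labeled subset of nodes, the paper's pipeline embeds the graph, learns the underlying curve, and regresses along it to predict `y` on the remaining nodes.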


Discovering Communication Pattern Shifts in Large-Scale Networks using Encoder Embedding and Vertex Dynamics

May 03, 2023
Cencheng Shen, Jonathan Larson, Ha Trinh, Xihan Qin, Youngser Park, Carey E. Priebe

The analysis of large-scale time-series network data, such as social media and email communications, remains a significant challenge for graph analysis methodology. In particular, the scalability of graph analysis is a critical issue hindering further progress in large-scale downstream inference. In this paper, we introduce a novel approach called "temporal encoder embedding" that can efficiently embed large amounts of graph data with linear complexity. We apply this method to an anonymized time-series communication network from a large organization spanning 2019-2020, consisting of over 100 thousand vertices and 80 million edges. Our method embeds the data within 10 seconds on a standard computer and enables the detection of communication pattern shifts for individual vertices, vertex communities, and the overall graph structure. Through supporting theory and simulation studies, we demonstrate the soundness of our approach under random graph models and its numerical effectiveness.
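
The linear-complexity claim is easiest to see in the static encoder embedding that this line of work builds on: each node is represented by its average connectivity to each vertex class. A minimal dense sketch (the real method operates on sparse edge lists, and the temporal variant extends it over time windows):

```python
import numpy as np

def encoder_embedding(A, labels):
    """One-hot encoder embedding: average connectivity of each node to each class.

    W has one column per class, with entries 1/n_c for nodes of class c, so
    A @ W costs time linear in the number of edges when A is stored sparsely.
    """
    classes, counts = np.unique(labels, return_counts=True)
    W = np.zeros((len(labels), len(classes)))
    for c, cls in enumerate(classes):
        W[labels == cls, c] = 1.0 / counts[c]
    return A @ W
```

On a two-block stochastic block model, nodes of class 0 end up with a larger coordinate in the class-0 column than in the class-1 column, so the embedding separates the communities; tracking how these coordinates move across time windows is what surfaces communication pattern shifts.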

* 25 pages main + 7 pages appendix 

Synergistic Graph Fusion via Encoder Embedding

Mar 31, 2023
Cencheng Shen, Carey E. Priebe, Jonathan Larson, Ha Trinh

In this paper, we introduce a novel approach to multi-graph embedding called graph fusion encoder embedding. The method is designed to work with multiple graphs that share a common vertex set. Under the supervised learning setting, we show that the resulting embedding exhibits a surprising yet highly desirable "synergistic effect": for a sufficiently large number of vertices, vertex classification accuracy always benefits from additional graphs. We provide a mathematical proof of this effect under the stochastic block model, and identify the necessary and sufficient condition for asymptotically perfect classification. Simulations and real-data experiments confirm the superiority of the proposed method, which consistently outperforms recent benchmark methods in classification.
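
One natural reading of fusing encoder embeddings over a shared vertex set is concatenation of the per-graph embeddings, so that each additional graph contributes additional class-connectivity coordinates. This is a hypothetical sketch of that reading, not the paper's exact construction:

```python
import numpy as np

def fused_encoder_embedding(graphs, labels):
    """Concatenate per-graph encoder embeddings over a shared vertex set.

    Each graph contributes one block of columns (average connectivity of
    each node to each class within that graph).
    """
    classes, counts = np.unique(labels, return_counts=True)
    W = np.zeros((len(labels), len(classes)))
    for c, cls in enumerate(classes):
        W[labels == cls, c] = 1.0 / counts[c]
    return np.hstack([A @ W for A in graphs])
```

Under this construction, a downstream classifier sees every graph's class-connectivity signal at once, which is the setting in which the abstract's synergistic effect is proved.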

* 17 pages main paper, 6 pages appendix 

Approximately optimal domain adaptation with Fisher's Linear Discriminant Analysis

Mar 14, 2023
Hayden S. Helm, Ashwin De Silva, Joshua T. Vogelstein, Carey E. Priebe, Weiwei Yang

We propose a class of models based on Fisher's Linear Discriminant (FLD) in the context of domain adaptation. The class is the convex combination of two hypotheses: i) an average hypothesis representing previously seen source tasks and ii) a hypothesis trained on a new target task. For a particular generative setting we derive the optimal convex combination of the two models under 0-1 loss, propose a computable approximation, and study the effect of various parameter settings on the relative risks between the optimal hypothesis, hypothesis i), and hypothesis ii). We demonstrate the effectiveness of the proposed optimal classifier in the context of EEG- and ECG-based classification settings and argue that the optimal classifier can be computed without access to direct information from any of the individual source tasks. We conclude by discussing further applications, limitations, and possible future directions.
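
The convex-combination idea can be sketched concretely for two-class FLD. Combining the two hypotheses by convexly averaging their weight vectors and thresholds is one plausible reading used here for illustration; the paper derives the optimal combination for a particular generative setting, which this sketch does not attempt:

```python
import numpy as np

def fld_weights(X, y):
    """Fisher's Linear Discriminant direction and bias for two classes.

    Direction w solves S_w w = (m1 - m0); the bias places the decision
    boundary at the midpoint of the class means.
    """
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

def combined_predict(x, h_source, h_target, alpha):
    """Predict with a convex combination of source and target FLD hypotheses."""
    (ws, bs), (wt, bt) = h_source, h_target
    w = alpha * ws + (1 - alpha) * wt
    b = alpha * bs + (1 - alpha) * bt
    return (x @ w + b > 0).astype(int)
```

When the source and target tasks are similar, intermediate values of `alpha` let the target classifier borrow strength from previously seen tasks, which is the trade-off the paper analyzes.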
