John P. Cunningham

Practical and Asymptotically Exact Conditional Sampling in Diffusion Models

Jun 30, 2023
Luhuan Wu, Brian L. Trippe, Christian A. Naesseth, David M. Blei, John P. Cunningham

Diffusion models have been successful on a range of conditional generation tasks including molecular design and text-to-image generation. However, these achievements have primarily depended on task-specific conditional training or error-prone heuristic approximations. Ideally, a conditional generation method should provide exact samples for a broad range of conditional distributions without requiring task-specific training. To this end, we introduce the Twisted Diffusion Sampler, or TDS. TDS is a sequential Monte Carlo (SMC) algorithm that targets the conditional distributions of diffusion models. The main idea is to use twisting, an SMC technique that enjoys good computational efficiency, to incorporate heuristic approximations without compromising asymptotic exactness. We first find, in simulation and on MNIST image inpainting and class-conditional generation tasks, that TDS provides a computational-statistical trade-off: it yields more accurate approximations with more particles, but offers empirical improvements over heuristics with as few as two particles. We then turn to motif-scaffolding, a core task in protein design, using a TDS extension to Riemannian diffusion models. On benchmark test cases, TDS allows flexible conditioning criteria and often outperforms the state of the art.

* Code: https://github.com/blt2114/twisted_diffusion_sampler 
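
As a rough sketch of the idea, under simplifying assumptions: the variant below proposes particles from the model's unconditional reverse kernel and reweights them with a heuristic twisting estimate of log p(y | x_t), resampling at every step. TDS itself additionally uses twisted (guided) proposals, so this is illustrative rather than the paper's exact algorithm. The callables `denoise_step` and `twist_logpdf` are hypothetical stand-ins for the reverse-diffusion transition and the twisting function (e.g., built from the model's x0-prediction).

```python
import numpy as np

# Simplified twisted-SMC sketch: particles follow the unconditional reverse
# kernel and are reweighted by a heuristic twist log p_hat(y | x_t); with
# multinomial resampling at every step, the incremental log-weight reduces
# to twist(t-1) - twist(t) because the transition terms cancel.
def twisted_smc(denoise_step, twist_logpdf, y, x_T, n_steps, rng=np.random):
    x = x_T.copy()                              # (K, ...) particle array
    logw = twist_logpdf(x, n_steps, y)          # initial twisted weights
    for t in range(n_steps, 0, -1):
        # resample particles in proportion to their twisted weights
        p = np.exp(logw - logw.max()); p /= p.sum()
        x = x[rng.choice(len(x), size=len(x), p=p)]
        # propose from the model's reverse transition p_theta(x_{t-1} | x_t)
        x_prev = denoise_step(x, t)
        # weight correction: ratio of successive twisting functions
        logw = twist_logpdf(x_prev, t - 1, y) - twist_logpdf(x, t, y)
        x = x_prev
    return x, logw                              # weighted samples approximating p(x_0 | y)
```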

Pathologies of Predictive Diversity in Deep Ensembles

Feb 03, 2023
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, John P. Cunningham

Classical results establish that ensembles of small models benefit when predictive diversity is encouraged, whether through bagging, boosting, or similar techniques. Here we demonstrate that this intuition does not carry over to ensembles of deep neural networks used for classification, and in fact the opposite can be true. Unlike regression models or small (unconfident) classifiers, predictions from large (confident) neural networks concentrate at the vertices of the probability simplex. Thus, decorrelating these points necessarily moves the ensemble prediction away from the vertices, harming confidence and moving points across decision boundaries. Through large-scale experiments, we demonstrate that diversity-encouraging regularizers hurt the performance of high-capacity deep ensembles used for classification. Even more surprisingly, discouraging predictive diversity can be beneficial. Together, this work strongly suggests that the best strategy for deep ensembles is to use more accurate, but likely less diverse, component models.
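
A toy numerical illustration of the geometric point above (the probabilities are made up): averaging two confident predictions that agree keeps the ensemble prediction near a simplex vertex, whereas "diversifying" one member pulls the average toward the interior and close to a decision boundary.

```python
import numpy as np

# Two confident members that agree: the ensemble mean stays near the vertex.
agree = np.array([[0.98, 0.01, 0.01],
                  [0.96, 0.02, 0.02]])
# Decorrelating one member toward another vertex drags the mean off the vertex.
diverse = np.array([[0.98, 0.01, 0.01],
                    [0.05, 0.90, 0.05]])

print(agree.mean(axis=0))    # [0.97  0.015 0.015] -> confident, still class 0
print(diverse.mean(axis=0))  # [0.515 0.455 0.03 ] -> barely class 0, near the boundary
```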

Posterior Collapse and Latent Variable Non-identifiability

Jan 02, 2023
Yixin Wang, David M. Blei, John P. Cunningham

Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.

* 19 pages, 4 figures; NeurIPS 2021 
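
As a rough sketch of the construction mentioned in the abstract (the architecture details below are my own illustrative assumptions, not the paper's exact model): an input-convex neural network defines a convex potential, and its gradient gives a monotone map that is bijective when the potential is strictly convex, which is the sense in which a Brenier map can be parameterized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative ICNN potential: nonnegative weights on the hidden path and
# convex, nondecreasing activations keep f(x) convex in x.
class ICNNPotential(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.Wx0 = nn.Linear(dim, hidden)
        self.Wx1 = nn.Linear(dim, hidden)
        self.A = nn.Parameter(0.1 * torch.rand(hidden, hidden))  # used through softplus -> >= 0
        self.a = nn.Parameter(0.1 * torch.rand(hidden))          # used through softplus -> >= 0

    def forward(self, x):
        z1 = F.softplus(self.Wx0(x))
        z2 = F.softplus(self.Wx1(x) + z1 @ F.softplus(self.A).t())
        return (z2 * F.softplus(self.a)).sum(-1)  # scalar convex potential f(x)

def brenier_map(potential, x):
    x = x.clone().detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(potential(x).sum(), x, create_graph=True)
    return grad  # gradient of a convex potential: a monotone (Brenier-style) map
```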

Denoising Deep Generative Models

Dec 05, 2022
Gabriel Loaiza-Ganem, Brendan Leigh Ross, Luhuan Wu, John P. Cunningham, Jesse C. Cresswell, Anthony L. Caterini

Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure. In this paper we propose two methodologies aimed at addressing this problem. Both are based on adding Gaussian noise to the data to remove the dimensionality mismatch during training, and both provide a denoising mechanism whose goal is to sample from the model as though no noise had been added to the data. Our first approach is based on Tweedie's formula, and the second on models that take the variance of the added noise as a conditional input. We show that, surprisingly, these approaches, while well motivated, only sporadically improve performance over not adding noise, and that other methods of addressing the dimensionality mismatch are more empirically adequate.

* NeurIPS 2022 ICBINB workshop (spotlight) 
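
The first approach rests on Tweedie's formula for Gaussian noise, y = x + sigma * eps. A minimal sketch, assuming a hypothetical `score_model(y, sigma)` that estimates the noisy-data score at noise level sigma:

```python
import torch

# Tweedie's formula: E[x | y] = y + sigma**2 * grad_y log p_sigma(y),
# so a score estimate at the training noise level gives a denoiser for free.
def tweedie_denoise(score_model, y, sigma):
    return y + sigma**2 * score_model(y, sigma)
```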

Posterior and Computational Uncertainty in Gaussian Processes

May 30, 2022
Jonathan Wenger, Geoff Pleiss, Marvin Pförtner, Philipp Hennig, John P. Cunningham

Gaussian processes scale prohibitively with the size of the dataset. In response, many approximation methods have been developed, which inevitably introduce approximation error. This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior. Therefore in practice, GP models are often as much about the approximation method as they are about the data. Here, we develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended. The most common GP approximations map to an instance in this class, such as methods based on the Cholesky factorization, conjugate gradients, and inducing points. For any method in this class, we prove (i) convergence of its posterior mean in the associated RKHS, (ii) decomposability of its combined posterior covariance into mathematical and computational covariances, and (iii) that the combined variance is a tight worst-case bound for the squared error between the method's posterior mean and the latent function. Finally, we empirically demonstrate the consequences of ignoring computational uncertainty and show how implicitly modeling it improves generalization performance on benchmark datasets.
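
A small sketch of the flavor of this class, under my own simplifying assumptions (an RBF kernel and unit-vector "actions", i.e., a subset-of-data policy): the posterior is formed only from the quantities actually computed, so the reported covariance combines mathematical and computational uncertainty and never falls below the exact posterior covariance.

```python
import numpy as np

def rbf(X, Z, ls=1.0):
    # X: (n, d), Z: (m, d)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Combined posterior after "computing" only the directions in S (n x i).
# With unit-vector columns, e.g. S = np.eye(len(X))[:, :i], this is a
# subset-of-data policy; other action choices (partial Cholesky columns,
# conjugate-gradient directions) fit the same template.
def combined_gp_posterior(X, y, Xstar, S, noise=1e-2, ls=1.0):
    Khat = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(Xstar, X, ls)
    C = S @ np.linalg.solve(S.T @ Khat @ S, S.T)   # low-rank stand-in for Khat^{-1}
    mean = Ks @ (C @ y)
    # mathematical + computational uncertainty in one covariance
    cov = rbf(Xstar, Xstar, ls) - Ks @ C @ Ks.T
    return mean, cov
```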

Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome

May 20, 2022
Elliott Gordon-Rodriguez, Thomas P. Quinn, John P. Cunningham

Data augmentation plays a key role in modern machine learning pipelines. While numerous augmentation strategies have been studied in the context of computer vision and natural language processing, less is known for other data modalities. Our work extends the success of data augmentation to compositional data, i.e., simplex-valued data, which is of particular interest in the context of the human microbiome. Drawing on key principles from compositional data analysis, such as the Aitchison geometry of the simplex and subcompositions, we define novel augmentation strategies for this data modality. Incorporating our data augmentations into standard supervised learning pipelines results in consistent performance gains across a wide range of standard benchmark datasets. In particular, we set a new state-of-the-art for key disease prediction tasks including colorectal cancer, type 2 diabetes, and Crohn's disease. In addition, our data augmentations enable us to define a novel contrastive learning model, which improves on previous representation learning approaches for microbiome compositional data. Our code is available at https://github.com/cunningham-lab/AugCoDa.
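
As a concrete flavor of what an Aitchison-geometry augmentation can look like (an illustrative mixup-style example of my own, not necessarily the paper's exact recipe): interpolate two compositions via powering and perturbation, i.e., geometrically in log space, then re-close the result onto the simplex.

```python
import numpy as np

def aitchison_mixup(x1, x2, lam, eps=1e-6):
    # Geodesic interpolation in the Aitchison geometry: a convex combination
    # of log-compositions (powering + perturbation) followed by closure.
    x1, x2 = x1 + eps, x2 + eps                  # guard against zeros in sparse data
    z = np.exp(lam * np.log(x1) + (1 - lam) * np.log(x2))
    return z / z.sum()

# Example usage: draw lam per sample, e.g. lam = np.random.beta(2, 2)
```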

On the Normalizing Constant of the Continuous Categorical Distribution

Apr 28, 2022
Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, Andres Potapczynski, John P. Cunningham

Probability distributions supported on the simplex enjoy a wide range of applications across statistics and machine learning. Recently, a novel family of such distributions has been discovered: the continuous categorical. This family enjoys remarkable mathematical simplicity; its density function resembles that of the Dirichlet distribution, but with a normalizing constant that can be written in closed form using elementary functions only. In spite of this mathematical simplicity, our understanding of the normalizing constant remains far from complete. In this work, we characterize the numerical behavior of the normalizing constant and we present theoretical and methodological advances that can, in turn, help to enable broader applications of the continuous categorical distribution. Our code is available at https://github.com/cunningham-lab/cb_and_cc/.
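
For concreteness, one way to write the closed form, assuming the density p(x; lam) proportional to prod_k lam_k^{x_k} over the simplex and pairwise-distinct lam_k; the naive evaluation below also hints at the numerical issue, since the divided-difference-style denominators cancel catastrophically when the lam_k are close.

```python
import numpy as np

def cc_partition_naive(lam):
    # Closed form of Z(lam) = integral over the simplex of prod_k lam_k**x_k,
    # for pairwise-distinct lam_k:
    #   Z(lam) = sum_k lam_k / prod_{j != k} (log lam_k - log lam_j)
    # The density is then p(x; lam) = prod_k lam_k**x_k / Z(lam).
    log_lam = np.log(lam)
    return sum(
        lam[k] / np.prod(np.delete(log_lam[k] - log_lam, k))
        for k in range(len(lam))
    )

# Sanity check for K = 2: cc_partition_naive(np.array([2.0, 1.0])) ~= 1 / ln 2 ~= 1.4427,
# matching direct integration of 2**x over [0, 1]. The terms blow up and cancel as the
# lam_k approach one another, which is the numerical behavior the paper characterizes.
```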

Deep Ensembles Work, But Are They Necessary?

Feb 14, 2022
Taiga Abe, E. Kelly Buchanan, Geoff Pleiss, Richard Zemel, John P. Cunningham

Ensembling neural networks is an effective way to increase accuracy, and can often match the performance of larger models. This observation poses a natural question: given the choice between a deep ensemble and a single neural network with similar accuracy, is one preferable over the other? Recent work suggests that deep ensembles may offer benefits beyond predictive power: namely, uncertainty quantification and robustness to dataset shift. In this work, we demonstrate limitations to these purported benefits, and show that a single (but larger) neural network can replicate these qualities. First, we show that ensemble diversity, by any metric, does not meaningfully contribute to an ensemble's ability to detect out-of-distribution (OOD) data, and that one can estimate ensemble diversity by measuring the relative improvement of a single larger model. Second, we show that the OOD performance afforded by ensembles is strongly determined by their in-distribution (InD) performance, and -- in this sense -- is not indicative of any "effective robustness". While deep ensembles are a practical way to achieve performance improvement (in agreement with prior work), our results show that they may be a tool of convenience rather than a fundamentally better model class.
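
As a minimal sketch of the kind of comparison described above (the names, shapes, and max-softmax score are my own assumptions): form the ensemble prediction by averaging member softmax outputs, score OOD-ness from the resulting probabilities, and compare against a single larger model with matched in-distribution accuracy.

```python
import numpy as np

def ensemble_probs(member_probs):
    # member_probs: (n_members, n_examples, n_classes) softmax outputs
    return member_probs.mean(axis=0)

def max_softmax_ood_score(probs):
    # higher score = flagged as more out-of-distribution
    return -probs.max(axis=-1)

# Usage idea: compare max_softmax_ood_score(ensemble_probs(members)) against
# the same score from a single (larger) network with similar in-distribution
# accuracy, e.g. via AUROC on an InD-vs-OOD label.
```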
