Christopher Beckham

Conservative objective models are a special kind of contrastive divergence-based energy model

Apr 07, 2023
Christopher Beckham, Christopher Pal

In this work we theoretically show that conservative objective models (COMs) for offline model-based optimisation (MBO) are a special kind of contrastive divergence-based energy model, one where the energy function represents both the unconditional probability of the input and the conditional probability of the reward variable. While the initial formulation only samples modes from its learned distribution, we propose a simple fix that replaces its gradient ascent sampler with a Langevin MCMC sampler. This gives rise to a special probabilistic model where the probability of sampling an input is proportional to its predicted reward. Lastly, we show that better samples can be obtained if the model is decoupled so that the unconditional and conditional probabilities are modelled separately.
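The proposed fix can be illustrated in a few lines. The NumPy sketch below (with a hypothetical `grad_energy` callable standing in for the gradient of the learned energy; step sizes and iteration counts are illustrative, not the paper's settings) contrasts the original deterministic sampler, which collapses onto modes, with an unadjusted Langevin sampler, which injects Gaussian noise at each step and so draws from the learned distribution:

```python
import numpy as np

def langevin_sample(grad_energy, x, n_steps=500, step_size=0.01, rng=None):
    # Unadjusted Langevin dynamics: x <- x - (eps/2) dE/dx + sqrt(eps) * N(0, I).
    # Samples approximately from p(x) proportional to exp(-E(x)).
    rng = np.random.default_rng(0) if rng is None else rng
    for _ in range(n_steps):
        x = x - 0.5 * step_size * grad_energy(x) \
              + np.sqrt(step_size) * rng.standard_normal(x.shape)
    return x

def gradient_ascent_sample(grad_energy, x, n_steps=500, step_size=0.01):
    # The original COMs-style sampler: deterministic descent on the energy
    # (i.e. ascent on predicted reward), which collapses onto modes.
    for _ in range(n_steps):
        x = x - step_size * grad_energy(x)
    return x
```

For a quadratic energy E(x) = ||x||^2 / 2 (so p is a standard Gaussian), the Langevin chain keeps unit variance across particles while the deterministic sampler drives every particle to the single mode at the origin.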

Score-based Diffusion Models in Function Space

Feb 14, 2023
Jae Hyun Lim, Nikola B. Kovachki, Ricardo Baptista, Christopher Beckham, Kamyar Azizzadenesheli, Jean Kossaifi, Vikram Voleti, Jiaming Song, Karsten Kreis, Jan Kautz, Christopher Pal, Arash Vahdat, Anima Anandkumar

Diffusion models have recently emerged as a powerful framework for generative modeling. They consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising. Despite their tremendous success, they are mostly formulated on finite-dimensional spaces, e.g. Euclidean, limiting their applications to many domains where the data has a functional form such as in scientific computing and 3D geometric data analysis. In this work, we introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space. In DDOs, the forward process perturbs input functions gradually using a Gaussian process. The generative process is formulated by integrating a function-valued Langevin dynamic. Our approach requires an appropriate notion of the score for the perturbed data distribution, which we obtain by generalizing denoising score matching to function spaces that can be infinite-dimensional. We show that the corresponding discretized algorithm generates accurate samples at a fixed cost that is independent of the data resolution. We theoretically and numerically verify the applicability of our approach on a set of problems, including generating solutions to the Navier-Stokes equation viewed as the push-forward distribution of forcings from a Gaussian Random Field (GRF).
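The forward process described above can be illustrated on a discretized function. The following is a minimal NumPy sketch that perturbs a function with a Gaussian-process draw rather than white noise; it assumes a squared-exponential covariance for concreteness (the paper's actual covariance operator and noise schedule may differ):

```python
import numpy as np

def rbf_kernel(grid, lengthscale=0.1):
    # Squared-exponential covariance evaluated on a 1D grid.
    d = grid[:, None] - grid[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_perturb(u, grid, sigma, rng):
    # One forward-process step in function space: add sigma times a
    # Gaussian-process sample, which is smooth in space and hence
    # well-defined at any discretization resolution (unlike white noise).
    K = rbf_kernel(grid) + 1e-6 * np.eye(len(grid))  # jitter for stability
    L = np.linalg.cholesky(K)
    return u + sigma * (L @ rng.standard_normal(len(grid)))
```

Because the noise is a draw from a Gaussian process, refining the grid simply evaluates the same random function at more points, which is what makes the resolution-independent sampling cost possible.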

* 26 pages, 7 figures 
Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests

Dec 03, 2022
Christopher Beckham, Martin Weiss, Florian Golemo, Sina Honari, Derek Nowrouzezahrai, Christopher Pal

Different types of mental rotation tests have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image. We explore a controlled setting whereby questions are posed about the properties of a scene if that scene were observed from another viewpoint. To do this we have created a new version of the CLEVR dataset that we call CLEVR Mental Rotation Tests (CLEVR-MRT). Using CLEVR-MRT we examine standard methods, show how they fall short, then explore novel neural architectures that involve inferring volumetric representations of a scene. These volumes can be manipulated via camera-conditioned transformations to answer the question. We examine different model variants through rigorous ablations and demonstrate the efficacy of volumetric representations.
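A camera-conditioned transformation of a volumetric representation amounts to resampling the feature volume under the camera's rotation. The snippet below is a simplified, non-differentiable stand-in (nearest-neighbour resampling, yaw rotation only; the paper's architecture applies learned, differentiable transforms to feature volumes):

```python
import numpy as np

def rotate_volume_yaw(vol, yaw_deg):
    # Nearest-neighbour resampling of a (D, H, W) voxel grid under a yaw
    # rotation about its vertical (H) axis, with zeros outside the grid.
    d, h, w = vol.shape
    t = np.deg2rad(yaw_deg)
    zc, xc = (d - 1) / 2.0, (w - 1) / 2.0
    z, y, x = np.meshgrid(np.arange(d), np.arange(h), np.arange(w), indexing="ij")
    # Source coordinates obtained by rotating about the volume centre.
    zs = np.cos(t) * (z - zc) - np.sin(t) * (x - xc) + zc
    xs = np.sin(t) * (z - zc) + np.cos(t) * (x - xc) + xc
    zi, xi = np.rint(zs).astype(int), np.rint(xs).astype(int)
    valid = (zi >= 0) & (zi < d) & (xi >= 0) & (xi < w)
    out = np.zeros_like(vol)
    out[valid] = vol[zi[valid], y[valid], xi[valid]]
    return out
```

In the paper's setting the rotation would come from the relative pose between the observed and queried cameras, and the resampling would be done with a differentiable (e.g. trilinear) kernel so gradients flow back into the volume.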

* Accepted for publication in Pattern Recognition 
Towards good validation metrics for generative models in offline model-based optimisation

Nov 19, 2022
Christopher Beckham, Alexandre Piche, David Vazquez, Christopher Pal

In this work we propose a principled evaluation framework for model-based optimisation to measure how well a generative model can extrapolate. We achieve this by interpreting the training and validation splits as draws from their respective `truncated' ground truth distributions, where examples in the validation set contain scores much larger than those in the training set. Model selection is performed on the validation set for some prescribed validation metric. A major research question, however, is determining which validation metric correlates best with the expected value of generated candidates with respect to the ground truth oracle; work towards answering this question can translate to large economic gains, since evaluating the ground truth oracle in the real world is expensive. We compare various validation metrics for generative adversarial networks using our framework. We also discuss limitations of our framework with respect to existing datasets and how progress can be made to mitigate them.
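The truncation construction can be sketched directly: split a labelled dataset at a score quantile so that every validation example scores higher than every training example, forcing model selection to probe extrapolation. A minimal NumPy sketch (the quantile `gamma` and the exact split rule are illustrative assumptions):

```python
import numpy as np

def truncated_splits(X, y, gamma=0.8):
    # Interpret train/validation as draws from truncated ground-truth
    # distributions: validation examples all score above the gamma-quantile
    # of y, so doing well on validation requires extrapolating in score.
    thresh = np.quantile(y, gamma)
    train = y <= thresh
    return (X[train], y[train]), (X[~train], y[~train])
```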

Challenges in leveraging GANs for few-shot data augmentation

Mar 30, 2022
Christopher Beckham, Issam Laradji, Pau Rodriguez, David Vazquez, Derek Nowrouzezahrai, Christopher Pal

In this paper, we explore the use of GAN-based few-shot data augmentation as a method to improve few-shot classification performance. We perform an exploration into how a GAN can be fine-tuned for such a task (one of which is in a class-incremental manner), as well as a rigorous empirical investigation into how well these models can perform to improve few-shot classification. We identify issues related to the difficulty of training such generative models under a purely supervised regime with very few examples, as well as issues regarding the evaluation protocols of existing works. We also find that in this regime, classification accuracy is highly sensitive to how the classes of the dataset are randomly split. Therefore, we propose a semi-supervised fine-tuning approach as a more pragmatic way forward to address these problems.
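The sensitivity finding above hinges on how the dataset's classes are randomly partitioned into base and novel sets. A small sketch of such a seeded split (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def random_class_split(classes, n_novel, seed):
    # Randomly partition class labels into base (seen) and novel (few-shot)
    # sets. The paper observes that few-shot accuracy is highly sensitive
    # to this split, so reporting results over several seeds matters.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(classes)
    return list(perm[n_novel:]), list(perm[:n_novel])  # base, novel
```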

Adversarial Mixup Resynthesizers

Apr 04, 2019
Christopher Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R Devon Hjelm, Christopher Pal

In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders. We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
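The two kinds of mixing functions mentioned above, interpolations of hidden states and masked combinations of latent representations, can be sketched on latent codes directly (the interface is an illustrative assumption; in the paper the mixing function is applied inside an autoencoder and trained adversarially):

```python
import numpy as np

def mix_latents(z1, z2, rng, mode="interp"):
    # "interp": convex combination of two latent codes with a random weight.
    # "mask":   random binary mask selecting each coordinate from z1 or z2.
    if mode == "interp":
        lam = rng.uniform()
        return lam * z1 + (1 - lam) * z2
    mask = rng.integers(0, 2, size=z1.shape)
    return mask * z1 + (1 - mask) * z2
```

The decoded output of such a mixed code is then trained to fool a discriminator for real versus synthesised data.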

Manifold Mixup: Learning Better Representations by Interpolating Hidden States

Oct 04, 2018
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, Yoshua Bengio

Deep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points off the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has the important effect of encouraging the hidden states for a class to be concentrated such that interpolations within the same class or between two different classes do not intersect with the real data points from other classes. This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space. We prove that the exact condition for this problem of underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice. Additionally, this concentration can be seen as making the features in earlier layers more discriminative. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and negative log-likelihood on held-out samples.
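The core operation, mixing hidden states and labels within a batch, is small enough to sketch. A minimal NumPy version (framework-agnostic; in practice this runs on a randomly chosen hidden layer inside the network during training):

```python
import numpy as np

def manifold_mixup_batch(h, y_onehot, alpha=2.0, rng=None):
    # Mix hidden states and their one-hot labels with a shared Beta-sampled
    # weight: h_mix = lam * h + (1 - lam) * h[perm], same lam for labels.
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(h))
    h_mix = lam * h + (1 - lam) * h[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return h_mix, y_mix
```

Training then proceeds with the usual classification loss on `(h_mix, y_mix)`, which is what pushes the network to be less confident at interpolated hidden states.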

* ICLR 2019, under review 
Unsupervised Depth Estimation, 3D Face Rotation and Replacement

Oct 01, 2018
Joel Ruben Antony Moniz, Christopher Beckham, Simon Rajotte, Sina Honari, Christopher Pal

We present an unsupervised approach for learning to estimate three dimensional (3D) facial structure from a single image while also predicting 3D viewpoint transformations that match a desired pose and facial geometry. We achieve this by inferring the depth of facial keypoints of an input image in an unsupervised manner, without using any form of ground-truth depth information. We show how it is possible to use these depths as intermediate computations within a new backpropable loss to predict the parameters of a 3D affine transformation matrix that maps inferred 3D keypoints of an input face to the corresponding 2D keypoints on a desired target facial geometry or pose. Our resulting approach can therefore be used to infer plausible 3D transformations from one face pose to another, allowing faces to be frontalized, transformed into 3D models or even warped to another pose and facial geometry. Lastly, we identify certain shortcomings with our formulation, and explore adversarial image translation techniques as a post-processing step to re-synthesize complete head shots for faces re-targeted to different poses or identities.
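The geometric step described above, fitting an affine map that carries inferred 3D keypoints onto target 2D keypoints, can be sketched in closed form. The paper predicts this transformation with a network inside a differentiable loss; the least-squares version below is only an illustration of the same mapping:

```python
import numpy as np

def fit_affine_3d_to_2d(kp3d, kp2d_target):
    # Solve for the affine map m (a 4x2 matrix acting on homogeneous 3D
    # coordinates) whose projection of the inferred 3D keypoints best
    # matches the target 2D keypoints, in the least-squares sense.
    n = len(kp3d)
    A = np.hstack([kp3d, np.ones((n, 1))])               # (n, 4) homogeneous
    m, *_ = np.linalg.lstsq(A, kp2d_target, rcond=None)  # (4, 2)
    return m
```

With depths inferred for the source face's keypoints, such a map lets one face pose be re-targeted onto another's geometry, which is what enables the frontalization and face-swap applications.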

* 32nd Conference on Neural Information Processing Systems (NIPS 2018)  
* Depth Estimation, Face Rotation, Face Swap 
ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events

Nov 25, 2017
Evan Racah, Christopher Beckham, Tegan Maharaj, Samira Ebrahimi Kahou, Prabhat, Christopher Pal

The detection and identification of extreme weather events in large-scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, many different types of spatially localized climate patterns are of interest, including hurricanes, extra-tropical cyclones, weather fronts, and blocking events, among others. Existing labeled data for these patterns can be incomplete in various ways, such as covering only certain years or geographic areas and having false negatives. This type of climate data therefore poses a number of interesting machine learning challenges. We present a multichannel spatiotemporal CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. We demonstrate that our approach is able to leverage temporal information and unlabeled data to improve the localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data. We present a dataset, ExtremeWeather, to encourage machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change. The dataset is available at extremeweatherdataset.github.io and the code is available at https://github.com/eracah/hur-detect.
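Bounding-box localization of the kind evaluated above is typically scored with intersection-over-union. A standard IoU helper (generic, not taken from the paper's codebase):

```python
def iou(a, b):
    # Intersection-over-union for axis-aligned boxes given as (x1, y1, x2, y2),
    # the usual metric for scoring predicted extreme-event bounding boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```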

A step towards procedural terrain generation with GANs

Jul 11, 2017
Christopher Beckham, Christopher Pal

Procedural terrain generation for video games has traditionally been done with smartly designed but handcrafted algorithms that generate heightmaps. We propose a first step toward learning and synthesising these heightmaps using recent advances in deep generative modelling, with openly available satellite imagery from NASA.
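A typical preprocessing step for such a pipeline is slicing elevation data into fixed-size heightmap patches scaled to the GAN's input range. A small sketch under that assumption (patch size and scaling are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def heightmap_patches(dem, patch=64, stride=64):
    # Slice a digital elevation model into fixed-size patches and rescale
    # each to [-1, 1], the usual input range for tanh-output GAN generators.
    h, w = dem.shape
    out = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            p = dem[i:i + patch, j:j + patch].astype(float)
            lo, hi = p.min(), p.max()
            out.append(2 * (p - lo) / (hi - lo) - 1 if hi > lo
                       else np.zeros_like(p))
    return np.stack(out)
```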
