Emilien Dupont

Deep Stochastic Processes via Functional Markov Transition Operators

May 24, 2023
Jin Xu, Emilien Dupont, Kaspar Märtens, Tom Rainforth, Yee Whye Teh

We introduce Markov Neural Processes (MNPs), a new class of Stochastic Processes (SPs) which are constructed by stacking sequences of neurally parameterised Markov transition operators in function space. We prove that these Markov transition operators can preserve the exchangeability and consistency of SPs. Therefore, the proposed iterative construction adds substantial flexibility and expressivity to the original framework of Neural Processes (NPs) without compromising consistency or adding restrictions. Our experiments demonstrate clear advantages of MNPs over baseline models on a variety of tasks.
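A minimal sketch of the stacking idea, not the authors' exact parameterisation: each layer is treated as a stochastic transition that maps the current function values at a set of input locations to new values, using a permutation-equivariant network so that exchangeability is respected. The class names, layer widths and Gaussian transition form below are assumptions for illustration.

```python
# Illustrative sketch: stochastic transition layers over function values at a
# set of input locations, stacked in sequence (NOT the exact MNP construction).
import torch
import torch.nn as nn

class TransitionLayer(nn.Module):
    """One Markov-style transition: y_k -> y_{k+1}, conditioned on inputs x."""
    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Mean and log-variance of the next function values.
        self.head = nn.Linear(2 * hidden, 2 * y_dim)

    def forward(self, x, y):
        # x: (batch, n_points, x_dim), y: (batch, n_points, y_dim)
        h = self.point_net(torch.cat([x, y], dim=-1))          # per-point features
        pooled = h.mean(dim=1, keepdim=True).expand_as(h)      # permutation-invariant summary
        mu, log_var = self.head(torch.cat([h, pooled], dim=-1)).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterised sample

class StackedTransitions(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(TransitionLayer(x_dim, y_dim) for _ in range(depth))

    def forward(self, x, y0):
        y = y0
        for layer in self.layers:   # iterate the Markov transitions
            y = layer(x, y)
        return y
```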

* 18 pages, 5 figures 

Spatial Functa: Scaling Functa to ImageNet Classification and Generation

Feb 09, 2023
Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, Hyunjik Kim

Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities. Building on this, Dupont et al. (2022) introduce a framework that views neural fields as data, termed *functa*, and propose to do deep learning directly on datasets of neural fields. In this work, we show that the proposed framework faces limitations when scaling up to even moderately complex datasets such as CIFAR-10. We then propose *spatial functa*, which overcome these limitations by using spatially arranged latent representations of neural fields, thereby allowing us to scale up the approach to ImageNet-1k at 256x256 resolution. We demonstrate performance competitive with Vision Transformers (Steiner et al., 2022) on classification and with Latent Diffusion (Rombach et al., 2022) on image generation.
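A rough sketch of what "spatially arranged latents" can mean in practice, under assumptions about the architecture (grid size, interpolation, MLP widths are placeholders): the latent is a small grid of feature vectors, and evaluating the field at a coordinate first interpolates that grid at the coordinate, then feeds the interpolated latent plus the coordinate to an MLP.

```python
# Minimal sketch of a neural field conditioned on a spatial grid of latents
# (an assumption about the general idea, not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialLatentField(nn.Module):
    def __init__(self, latent_dim=64, grid_size=16, out_dim=3, hidden=256):
        super().__init__()
        # One latent vector per spatial cell, laid out as (C, H, W).
        self.latents = nn.Parameter(0.01 * torch.randn(latent_dim, grid_size, grid_size))
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, coords):
        # coords: (n_points, 2) in [-1, 1]; interpolate the latent grid at each point.
        grid = coords.view(1, -1, 1, 2)                              # (1, N, 1, 2)
        z = F.grid_sample(self.latents.unsqueeze(0), grid,
                          mode='bilinear', align_corners=True)       # (1, C, N, 1)
        z = z.squeeze(0).squeeze(-1).t()                             # (N, C)
        return self.mlp(torch.cat([z, coords], dim=-1))              # (N, out_dim)

# Evaluate RGB values on a 256x256 pixel grid.
field = SpatialLatentField()
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 256), torch.linspace(-1, 1, 256), indexing='ij')
rgb = field(torch.stack([xs, ys], dim=-1).view(-1, 2))
```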

COIN++: Data Agnostic Neural Compression

Jan 30, 2022
Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Goliński, Yee Whye Teh, Arnaud Doucet

Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data to implicit neural representations, i.e. neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of the implicit neural representation directly, we store modulations applied to a meta-learned base network as a compressed code for the data. We further quantize and entropy code these modulations, leading to large compression gains while reducing encoding time by two orders of magnitude compared to baselines. We empirically demonstrate the effectiveness of our method by compressing various data modalities, from images to medical and climate data.
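A hedged sketch of the modulation idea described above: a shared base MLP whose hidden pre-activations are shifted by a small per-datum modulation vector, with encoding amounting to fitting only that vector against the frozen base network. The layer sizes, sine activation and plain SGD inner loop are assumptions for illustration, not the paper's exact meta-learning procedure.

```python
import torch
import torch.nn as nn

class ModulatedMLP(nn.Module):
    """Base network shared across data; `mods` shifts the hidden pre-activations."""
    def __init__(self, in_dim=2, out_dim=3, hidden=128, layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim, hidden)] +
            [nn.Linear(hidden, hidden) for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, out_dim)
        self.n_mods = layers * hidden    # one shift per hidden unit

    def forward(self, coords, mods):
        h = coords
        for i, layer in enumerate(self.layers):
            shift = mods[i * layer.out_features:(i + 1) * layer.out_features]
            h = torch.sin(layer(h) + shift)   # sine activation, shifted by the modulation
        return self.out(h)

def fit_modulations(net, coords, targets, steps=10, lr=1e-2):
    """Encode one datum: fit only its modulation vector against the frozen base net."""
    mods = torch.zeros(net.n_mods, requires_grad=True)
    opt = torch.optim.SGD([mods], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords, mods) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return mods.detach()   # this (quantised, entropy-coded) vector is the compressed code
```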

From data to functa: Your data point is a function and you should treat it like one

Jan 28, 2022
Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Rezende, Dan Rosenbaum

It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning on these functions instead, treating them as data? In this context we refer to the data as functa, and propose a framework for deep learning on functa. This view presents a number of challenges around efficient conversion from data to functa, compact representation of functa, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges and apply it to a wide range of data modalities including images, 3D shapes, neural radiance fields (NeRF) and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification.
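A toy sketch of the downstream-task part of the recipe, under the assumption that each data point has already been converted to a latent/modulation vector: deep learning on functa then means training an ordinary model on those vectors. The dataset and dimensions below are random stand-ins.

```python
# Toy sketch of "deep learning on functa": once each datum has been fitted as
# a neural field and summarised by a vector, downstream tasks operate on vectors.
import torch
import torch.nn as nn

mod_dim, n_classes = 512, 10
classifier = nn.Sequential(
    nn.Linear(mod_dim, 256), nn.ReLU(),
    nn.Linear(256, n_classes))

opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# `functaset` is assumed to yield (modulation_vector, label) pairs; here we use
# random placeholder data in place of fitted neural-field representations.
functaset = [(torch.randn(mod_dim), torch.randint(0, n_classes, ()).item())
             for _ in range(64)]

for mods, label in functaset:
    opt.zero_grad()
    logits = classifier(mods.unsqueeze(0))
    loss = loss_fn(logits, torch.tensor([label]))
    loss.backward()
    opt.step()
```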

COIN: COmpression with Implicit Neural representations

Mar 03, 2021
Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet

We propose a new simple approach for image compression: instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image. Specifically, to encode an image, we fit it with an MLP which maps pixel locations to RGB values. We then quantize and store the weights of this MLP as a code for the image. To decode the image, we simply evaluate the MLP at every pixel location. We find that this simple approach outperforms JPEG at low bit-rates, even without entropy coding or learning a distribution over weights. While our framework is not yet competitive with state-of-the-art compression methods, we show that it has various attractive properties which could make it a viable alternative to other neural data compression approaches.
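A minimal sketch of this recipe, assuming a small sine-activation MLP and half-precision quantisation (network size, step count and quantisation scheme are placeholders, not the paper's exact settings): overfit the MLP to one image, then keep its quantised weights as the code.

```python
import torch
import torch.nn as nn

class Siren(nn.Module):
    """Small MLP with sine activations mapping (x, y) -> (r, g, b)."""
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        self.linears = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, xy):
        h = xy
        for lin in self.linears[:-1]:
            h = torch.sin(30.0 * lin(h))
        return torch.sigmoid(self.linears[-1](h))

def encode(image, steps=2000, lr=2e-4):
    # image: (H, W, 3) float tensor with values in [0, 1]
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij')
    coords = torch.stack([xs, ys], dim=-1).view(-1, 2)
    target = image.view(-1, 3)
    net = Siren()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ((net(coords) - target) ** 2).mean().backward()   # overfit to this one image
        opt.step()
    # Store half-precision weights as the code; decoding re-evaluates the MLP per pixel.
    return {k: v.half() for k, v in net.state_dict().items()}
```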

Generative Models as Distributions of Functions

Feb 09, 2021
Emilien Dupont, Yee Whye Teh, Arnaud Doucet

Generative models are typically trained on grid-like data such as images. As a result, the size of these models usually scales directly with the underlying grid resolution. In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions. We then build generative models by learning distributions over such functions. By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that scale independently of signal resolution and dimension. To train our model, we use an adversarial approach with a discriminator that acts directly on continuous signals. Through experiments on both images and 3D shapes, we demonstrate that our model can learn rich distributions of functions independently of data type and resolution.
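A rough sketch of this setup under simplifying assumptions: the generator defines a function value at any coordinate given a latent code, and the discriminator scores sets of (coordinate, feature) pairs. The pointwise-MLP-plus-pooling discriminator here is a simplification of the set architecture used in the paper; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class FunctionGenerator(nn.Module):
    def __init__(self, latent_dim=64, coord_dim=2, feat_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + coord_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim))

    def forward(self, z, coords):
        # z: (batch, latent_dim), coords: (batch, n_points, coord_dim)
        z = z.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return self.net(torch.cat([z, coords], dim=-1))   # features at each coordinate

class SetDiscriminator(nn.Module):
    def __init__(self, coord_dim=2, feat_dim=3, hidden=128):
        super().__init__()
        self.point = nn.Sequential(nn.Linear(coord_dim + feat_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.score = nn.Linear(hidden, 1)

    def forward(self, coords, feats):
        h = self.point(torch.cat([coords, feats], dim=-1)).mean(dim=1)  # pool over points
        return self.score(h)

# One forward pass; the same model handles any number of sampled coordinates.
g, d = FunctionGenerator(), SetDiscriminator()
coords = torch.rand(8, 1024, 2) * 2 - 1            # 1024 random locations per sample
fake = g(torch.randn(8, 64), coords)
logits = d(coords, fake)
```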

LieTransformer: Equivariant self-attention for Lie Groups

Dec 20, 2020
Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim

Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing. Such works have mostly focused on group equivariant convolutions, building on the result that group equivariant linear maps are necessarily convolutions. In this work, we extend the scope of the literature to non-linear neural network modules, namely self-attention, which is emerging as a prominent building block of deep learning models. We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. We demonstrate the generality of our approach by showing experimental results that are competitive with baseline methods on a wide range of tasks: shape counting on point clouds, molecular property regression and modelling particle trajectories under Hamiltonian dynamics.
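A heavily simplified sketch of the underlying idea, shown only for the translation group: attention weights are computed from content and relative group elements (here, coordinate differences x_j - x_i), so a global translation of all points leaves the output features unchanged. This is an illustrative special case with made-up layer sizes, not the general LieSelfAttention layer.

```python
import torch
import torch.nn as nn

class RelativeSelfAttention(nn.Module):
    """Translation-equivariant self-attention over a point cloud."""
    def __init__(self, dim=64, hidden=64):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim)
        # Attention score depends on (query, key, relative position).
        self.score = nn.Sequential(nn.Linear(2 * dim + 2, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))

    def forward(self, x, feats):
        # x: (n, 2) point coordinates, feats: (n, dim) per-point features
        q, k, v = self.to_qkv(feats).chunk(3, dim=-1)
        n = x.shape[0]
        rel = x.unsqueeze(0) - x.unsqueeze(1)                  # (n, n, 2): x_j - x_i
        qi = q.unsqueeze(1).expand(n, n, -1)
        kj = k.unsqueeze(0).expand(n, n, -1)
        logits = self.score(torch.cat([qi, kj, rel], dim=-1)).squeeze(-1)  # (n, n)
        attn = logits.softmax(dim=-1)
        return attn @ v                                        # (n, dim)
```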

STEER: Simple Temporal Regularization For Neural ODEs

Jun 18, 2020
Arnab Ghosh, Harkirat Singh Behl, Emilien Dupont, Philip H. S. Torr, Vinay Namboodiri

Training Neural Ordinary Differential Equations (ODEs) is often computationally expensive. Indeed, computing the forward pass of such models involves solving an ODE which can become arbitrarily complex during training. Recent works have shown that regularizing the dynamics of the ODE can partially alleviate this. In this paper we propose a new regularization technique: randomly sampling the end time of the ODE during training. The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks. Further, the technique is orthogonal to several other methods proposed to regularize the dynamics of ODEs and as such can be used in conjunction with them. We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
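A small sketch of the regularisation: instead of always integrating to a fixed end time T, the end time is sampled uniformly from [T - b, T + b] at each training step. The fixed-step Euler solver below is only to keep the example self-contained (in practice an adaptive solver such as torchdiffeq's is used); the network and constants are placeholders.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, y):
        return self.net(y)

def integrate(func, y0, t_end, n_steps=20):
    """Simple explicit Euler integration from t = 0 to t = t_end."""
    y, t = y0, torch.zeros(())
    dt = t_end / n_steps
    for _ in range(n_steps):
        y = y + dt * func(t, y)
        t = t + dt
    return y

func, y0 = ODEFunc(), torch.randn(8, 2)
T, b = 1.0, 0.5
t_end = T + (2 * torch.rand(()) - 1) * b     # STEER: t_end ~ Uniform(T - b, T + b)
y_T = integrate(func, y0, t_end)             # used in place of a fixed-horizon solve
```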

Equivariant Neural Rendering

Jun 13, 2020
Emilien Dupont, Miguel Angel Bautista, Alex Colburn, Aditya Sankar, Carlos Guestrin, Josh Susskind, Qi Shan

We propose a framework for learning neural scene representations directly from images, without 3D supervision. Our key insight is that 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. Specifically, we introduce a loss which enforces equivariance of the scene representation with respect to 3D transformations. Our formulation allows us to infer and render scenes in real time while achieving comparable results to models requiring minutes for inference. In addition, we introduce two challenging new datasets for scene representation and neural rendering, including scenes with complex lighting and backgrounds. Through experiments, we show that our model achieves compelling results on these datasets as well as on standard ShapeNet benchmarks.
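A sketch of the equivariance loss with placeholder modules: `encoder`, `renderer` and `apply_transform` below are hypothetical stand-ins for the paper's scene encoder, neural renderer and 3D transformation of the representation, and `rel_pose` is assumed to be a 4x4 relative camera pose matrix.

```python
import torch

def equivariance_loss(encoder, renderer, apply_transform, img_a, img_b, rel_pose):
    """Encode view A, transform its scene representation by the relative camera
    pose, render, and compare against view B (and symmetrically for B -> A)."""
    scene_a = encoder(img_a)
    scene_b = encoder(img_b)
    render_b = renderer(apply_transform(scene_a, rel_pose))                    # A -> B
    render_a = renderer(apply_transform(scene_b, torch.linalg.inv(rel_pose)))  # B -> A
    return ((render_b - img_b) ** 2).mean() + ((render_a - img_a) ** 2).mean()
```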

* ICML 2020 camera ready 