Andrew Gordon Wilson

Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Feb 25, 2021
Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, Andrew Gordon Wilson

Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints
Oct 26, 2020
Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson

Learning Invariances in Neural Networks
Oct 22, 2020
Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson

On the model-based stochastic value gradient for continuous reinforcement learning
Aug 28, 2020
Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson

Improving GAN Training with Probability Ratio Clipping and Sample Reweighting
Jun 30, 2020
Yue Wu, Pan Zhou, Andrew Gordon Wilson, Eric P. Xing, Zhiting Hu

Why Normalizing Flows Fail to Detect Out-of-Distribution Data
Jun 15, 2020
Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson

Bayesian Deep Learning and a Probabilistic Perspective of Generalization
Mar 17, 2020
Andrew Gordon Wilson, Pavel Izmailov

Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited
Mar 04, 2020
Wesley J. Maddox, Gregory Benton, Andrew Gordon Wilson

Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data
Feb 25, 2020
Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson
