Jeffrey Pennington

Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm

Jul 11, 2022

Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling

Jun 15, 2022

Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions

Jun 15, 2022

Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression

May 30, 2022

Homogenization of SGD in high-dimensions: Exact dynamics and generalization properties

May 14, 2022

Covariate Shift in High-Dimensional Random Feature Regression

Nov 16, 2021

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition

Nov 04, 2020

Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit

Oct 14, 2020

Temperature check: theory and practice for training models with softmax-cross-entropy losses

Oct 14, 2020

Finite Versus Infinite Neural Networks: an Empirical Study

Sep 08, 2020