
Jeffrey Pennington

Covariate Shift in High-Dimensional Random Feature Regression

Nov 16, 2021
Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition

Nov 04, 2020
Ben Adlam, Jeffrey Pennington

Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit

Oct 14, 2020
Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek

Temperature check: theory and practice for training models with softmax-cross-entropy losses

Oct 14, 2020
Atish Agarwala, Jeffrey Pennington, Yann Dauphin, Sam Schoenholz

Finite Versus Infinite Neural Networks: an Empirical Study

Sep 08, 2020
Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, Jascha Sohl-Dickstein

The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization

Aug 15, 2020
Ben Adlam, Jeffrey Pennington

The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks

Jun 25, 2020
Wei Hu, Lechao Xiao, Ben Adlam, Jeffrey Pennington

Exact posterior distributions of wide Bayesian neural networks

Jun 18, 2020
Jiri Hron, Yasaman Bahri, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein

Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks

Jan 16, 2020
Wei Hu, Lechao Xiao, Jeffrey Pennington
