Jeffrey Pennington

Scaling Exponents Across Parameterizations and Optimizers

Jul 08, 2024

4+3 Phases of Compute-Optimal Neural Scaling Laws

May 23, 2024

High dimensional analysis reveals conservative sharpening and a stochastic edge of stability

Apr 30, 2024

Training LLMs over Neurally Compressed Text

Apr 04, 2024

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

Dec 22, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"

Nov 15, 2023

Small-scale proxies for large-scale Transformer training instabilities

Sep 25, 2023

Second-order regression models exhibit progressive sharpening to the edge of stability

Oct 10, 2022

Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm

Jul 11, 2022

Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling

Jun 15, 2022