Gavia Gray

Power Lines: Scaling Laws for Weight Decay and Batch Size in LLM Pre-training

May 19, 2025

Straight to Zero: Why Linearly Decaying the Learning Rate to Zero Works Best for LLMs

Feb 21, 2025

Normalization Layer Per-Example Gradients are Sufficient to Predict Gradient Noise Scale in Transformers

Nov 01, 2024