Jeremy M. Cohen

Adaptive Gradient Methods at the Edge of Stability

Jul 29, 2022
Jeremy M. Cohen, Behrooz Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, David Cardoze, Zachary Nado, George E. Dahl, Justin Gilmer

Very little is known about the training dynamics of adaptive gradient methods like Adam in deep learning. In this paper, we shed light on the behavior of these algorithms in the full-batch and sufficiently large batch settings. Specifically, we empirically demonstrate that during full-batch training, the maximum eigenvalue of the preconditioned Hessian typically equilibrates at a certain numerical value -- the stability threshold of a gradient descent algorithm. For Adam with step size $\eta$ and $\beta_1 = 0.9$, this stability threshold is $38/\eta$. Similar effects occur during minibatch training, especially as the batch size grows. Yet, even though adaptive methods train at the "Adaptive Edge of Stability" (AEoS), their behavior in this regime differs in a significant way from that of non-adaptive methods at the Edge of Stability (EoS). Whereas non-adaptive algorithms at the EoS are blocked from entering high-curvature regions of the loss landscape, adaptive gradient methods at the AEoS can keep advancing into high-curvature regions, while adapting the preconditioner to compensate. Our findings can serve as a foundation for the community's future understanding of adaptive gradient methods in deep learning.
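
As a rough sketch of where the $38/\eta$ figure can come from (under the simplifying assumptions of a one-dimensional quadratic loss $f(x) = \tfrac{1}{2} a x^2$ and a frozen preconditioner; the paper's own analysis covers the general case), consider Adam's EMA-style momentum with coefficient $\beta_1$:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, a x_t, \qquad x_{t+1} = x_t - \eta\, m_t .$$

This linear recurrence has characteristic polynomial $z^2 - \bigl(1 + \beta_1 - \eta (1 - \beta_1) a\bigr) z + \beta_1$ and loses stability when a root crosses $z = -1$, i.e. when

$$a = \frac{2 + 2\beta_1}{(1 - \beta_1)\,\eta} = \frac{38}{\eta} \quad \text{for } \beta_1 = 0.9 .$$

Here $a$ plays the role of the preconditioned curvature, matching the threshold quoted in the abstract.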

Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability

Feb 26, 2021
Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, Ameet Talwalkar

We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability. In this regime, the maximum eigenvalue of the training loss Hessian hovers just above the numerical value $2 / \text{(step size)}$, and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales. Since this behavior is inconsistent with several widespread presumptions in the field of optimization, our findings raise questions as to whether these presumptions are relevant to neural network training. We hope that our findings will inspire future efforts aimed at rigorously understanding optimization at the Edge of Stability. Code is available at https://github.com/locuslab/edge-of-stability.
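
On a single quadratic direction with curvature $a$, gradient descent multiplies the error by $1 - \eta a$ each step, so it diverges once $a > 2/\eta$; that is the threshold the sharpness hovers just above in the regime described here. The measurement itself is the largest eigenvalue of the training loss Hessian. The snippet below is not the authors' implementation (see the linked repository for that); it is a minimal PyTorch sketch, assuming a full-batch loss tensor `loss` and parameter list `params` are already in hand:

```python
import torch

def sharpness(loss, params, n_iters=20):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    by power iteration on Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    v = torch.randn_like(flat_grad)
    v = v / v.norm()
    eig = 0.0
    for _ in range(n_iters):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = torch.dot(v, hv).item()  # Rayleigh quotient; v is unit-norm
        v = hv / (hv.norm() + 1e-12)   # power iteration step
    return eig
```

Tracking `sharpness(loss, list(model.parameters()))` against `2 / lr` over the course of full-batch training is, in essence, the comparison the abstract describes; `model` and `lr` here are hypothetical stand-ins for a network and its step size.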

* To appear in ICLR 2021. 72 pages, 107 figures.