Multilayer Lookahead: a Nested Version of Lookahead

Oct 27, 2021
Denys Pushkin, Luis Barba

Figures 1–4 for Multilayer Lookahead: a Nested Version of Lookahead

In recent years, SGD and its variants have become the standard tool to train Deep Neural Networks. In this paper, we focus on the recently proposed variant Lookahead, which improves upon SGD in a wide range of applications. Following this success, we study an extension of this algorithm, the \emph{Multilayer Lookahead} optimizer, which recursively wraps Lookahead around itself. We prove the convergence of Multilayer Lookahead with two layers to a stationary point of smooth non-convex functions with $O(\frac{1}{\sqrt{T}})$ rate. We also justify the improved generalization of both Lookahead over SGD, and of Multilayer Lookahead over Lookahead, by showing how they amplify the implicit regularization effect of SGD. We empirically verify our results and show that Multilayer Lookahead outperforms Lookahead on CIFAR-10 and CIFAR-100 classification tasks, and on GAN training on the MNIST dataset.
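To make the nesting concrete, here is a minimal NumPy sketch of the two-layer case described in the abstract: standard Lookahead runs $k$ fast SGD steps and then interpolates the slow weights toward the fast ones; Multilayer Lookahead wraps that whole procedure in another slow-weight loop. The function names, the hyperparameter names (`k1`, `alpha1`, `k2`, `alpha2`), and the toy quadratic objective are illustrative assumptions, not the paper's code; only the nesting structure follows the abstract.

```python
import numpy as np

def sgd_step(w, grad, lr):
    # one plain SGD step on the current weights
    return w - lr * grad(w)

def lookahead(w, grad, lr, k, alpha, outer_steps):
    # standard Lookahead: k fast SGD steps, then pull the
    # slow weights a fraction alpha toward the fast weights
    slow = w.copy()
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):
            fast = sgd_step(fast, grad, lr)
        slow = slow + alpha * (fast - slow)
    return slow

def multilayer_lookahead(w, grad, lr, k1, alpha1, k2, alpha2, steps):
    # two-layer Multilayer Lookahead: the "fast" optimizer of the
    # outer loop is itself a full Lookahead run (k2 slow updates,
    # each built from k1 SGD steps)
    slowest = w.copy()
    for _ in range(steps):
        inner = lookahead(slowest, grad, lr, k1, alpha1, k2)
        slowest = slowest + alpha2 * (inner - slowest)
    return slowest
```

On a convex quadratic such as $f(w) = \tfrac{1}{2}\lVert w\rVert^2$ (gradient `lambda w: w`), every layer of interpolation shrinks the iterate toward the minimizer, so the nested optimizer converges just as the flat one does; the abstract's convergence and generalization claims concern the harder smooth non-convex setting.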
