Luis Barba

Multilayer Lookahead: a Nested Version of Lookahead

Oct 27, 2021
Denys Pushkin, Luis Barba

In recent years, SGD and its variants have become the standard tools for training deep neural networks. In this paper, we focus on the recently proposed Lookahead variant, which improves upon SGD in a wide range of applications. Following this success, we study an extension of this algorithm, the \emph{Multilayer Lookahead} optimizer, which recursively wraps Lookahead around itself. We prove that Multilayer Lookahead with two layers converges to a stationary point of smooth non-convex functions at an $O(\frac{1}{\sqrt{T}})$ rate. We also justify the improved generalization of both Lookahead over SGD and of Multilayer Lookahead over Lookahead by showing how they amplify the implicit regularization effect of SGD. We empirically verify our results and show that Multilayer Lookahead outperforms Lookahead on CIFAR-10 and CIFAR-100 classification tasks, and on GAN training on the MNIST dataset.
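
Lookahead maintains slow weights that are periodically pulled a fraction of the way toward the result of several fast optimizer steps, and the abstract describes Multilayer Lookahead as recursively wrapping this scheme around itself. Below is a minimal NumPy sketch of that two-layer nesting; the hyperparameter names (k1, alpha1, k2, alpha2) and the choice of plain SGD as the innermost optimizer are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sgd_step(w, grad_fn, lr):
    """One plain SGD step on the current weights."""
    return w - lr * grad_fn(w)

def lookahead(w_slow, grad_fn, lr, k, alpha, inner_step=sgd_step):
    """One Lookahead outer update: run k inner steps starting from the
    slow weights, then move the slow weights a fraction alpha toward
    the resulting fast weights."""
    w_fast = w_slow.copy()
    for _ in range(k):
        w_fast = inner_step(w_fast, grad_fn, lr)
    return w_slow + alpha * (w_fast - w_slow)

def multilayer_lookahead(w, grad_fn, lr, k1, alpha1, k2, alpha2):
    """Two-layer (nested) Lookahead: the outer Lookahead wraps an inner
    Lookahead, which in turn wraps SGD."""
    inner = lambda v, g, eta: lookahead(v, g, eta, k1, alpha1)
    return lookahead(w, grad_fn, lr, k2, alpha2, inner_step=inner)

# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.ones(3)
for _ in range(50):
    w = multilayer_lookahead(w, lambda v: v, lr=0.1,
                             k1=5, alpha1=0.5, k2=5, alpha2=0.5)
print(np.linalg.norm(w))  # approaches 0 after a few outer updates
```

Setting alpha1 = alpha2 = 1 collapses both wrappers back to plain SGD, which makes the nesting easy to sanity-check.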

Implicit Gradient Alignment in Distributed and Federated Learning

Jun 25, 2021
Yatin Dandi, Luis Barba, Martin Jaggi

A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients (or mini-batches) caused by the heterogeneity and stochasticity of the distributed data. One way to alleviate this problem is to encourage the alignment of gradients across different clients throughout training. Our analysis reveals that this goal can be accomplished by using an optimization method that replicates the implicit regularization effect of SGD, leading to gradient alignment as well as improvements in test accuracy. Since this regularization in SGD relies entirely on the sequential use of different mini-batches during training, it is inherently absent when training with large mini-batches. To obtain the generalization benefits of this regularization while increasing parallelism, we propose GradAlign, a novel algorithm that induces the same implicit regularization while allowing the use of arbitrarily large batches in each update. We experimentally validate the benefits of our algorithm in different distributed and federated learning settings.
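
The quantity the abstract is concerned with, how well per-client (or per-mini-batch) gradients agree with one another, can be made concrete with a small measurement utility. The sketch below only illustrates that misalignment; it is not the paper's GradAlign algorithm, and the function name and the cosine-similarity choice are assumptions.

```python
import numpy as np

def pairwise_gradient_alignment(client_grads):
    """Average pairwise cosine similarity between per-client gradient
    vectors. Values near 1 indicate well-aligned updates; values near 0
    (or negative) indicate the misalignment caused by heterogeneous
    client data."""
    grads = [np.asarray(g, dtype=float).ravel() for g in client_grads]
    sims = []
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            denom = np.linalg.norm(grads[i]) * np.linalg.norm(grads[j]) + 1e-12
            sims.append(float(grads[i] @ grads[j]) / denom)
    return float(np.mean(sims))

# Two heterogeneous clients with nearly orthogonal gradients.
print(pairwise_gradient_alignment([[1.0, 0.0], [0.1, 1.0]]))  # ~0.10
```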

Dynamic Model Pruning with Feedback

Jun 12, 2020
Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi

Deep neural networks often have millions of parameters, which can hinder their deployment on low-end devices, not only because of high memory requirements but also because of increased inference latency. We propose a novel model compression method that generates a sparse trained model without additional overhead: by (i) allowing dynamic allocation of the sparsity pattern and (ii) incorporating a feedback signal to reactivate prematurely pruned weights, we obtain a performant sparse model in a single training pass (retraining is not needed, though it can further improve performance). We evaluate our method on CIFAR-10 and ImageNet and show that the resulting sparse models can reach the state-of-the-art performance of dense models. Moreover, their performance surpasses that of models generated by all previously proposed pruning schemes.

* appearing at ICLR 2020 
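
As a rough illustration of the two ingredients the abstract describes, a dynamically re-allocated sparsity pattern and a feedback path that lets prematurely pruned weights recover, here is a minimal NumPy sketch. The function names, the top-k magnitude criterion, and the remasking schedule are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def topk_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction."""
    k = max(1, int(round((1.0 - sparsity) * w.size)))
    mask = np.zeros(w.shape, dtype=bool)
    idx = np.argsort(np.abs(w), axis=None)[-k:]
    mask.flat[idx] = True
    return mask

def prune_with_feedback(w_dense, grad_fn, lr, steps, sparsity, remask_every=10):
    """Train a sparse model while keeping a dense copy of the weights.
    Gradients are evaluated on the masked (sparse) weights but applied to
    the dense weights, so pruned entries keep receiving a signal and can be
    reactivated the next time the mask is recomputed."""
    mask = topk_mask(w_dense, sparsity)
    for t in range(steps):
        if t % remask_every == 0:
            mask = topk_mask(w_dense, sparsity)      # dynamic sparsity pattern
        w_sparse = w_dense * mask                     # model that is evaluated
        w_dense = w_dense - lr * grad_fn(w_sparse)    # feedback to dense weights
    return w_dense * mask, mask

# Toy usage: the gradient of f(w) = ||w||^2 / 2 is the identity.
w_sparse, mask = prune_with_feedback(np.random.randn(20), lambda w: w,
                                     lr=0.1, steps=100, sparsity=0.8)
print(int(mask.sum()))  # 4 weights remain active at 80% sparsity
```

The feedback path, updating the dense copy rather than the masked one, is what the abstract credits with recovering prematurely pruned weights in a single training pass.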