Santiago Akle Serrano

Stochastic Weight Averaging in Parallel: Large-Batch Training that Generalizes Well

Jan 07, 2020
Vipul Gupta, Santiago Akle Serrano, Dennis DeCoste

We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.
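The refinement step described in the abstract, several independently trained copies whose weights are averaged element-wise, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the names `base_model` and `worker_loaders` are hypothetical, the copies are refined sequentially rather than in parallel, and the learning-rate schedules and data sharding used in the paper are omitted.

```python
# Minimal sketch of SWAP's second phase (assumptions noted above):
# each worker continues training an independent copy of the large-batch
# solution, and the final model averages their weights element-wise.
import copy
import torch
import torch.nn as nn

def refine(model, loader, epochs=1, lr=0.01):
    """Continue training one independent copy with small mini-batches."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def average_weights(models):
    """Element-wise average of the refined copies' parameters.
    Batch-norm running statistics are not averaged here; SWA-style
    methods typically recompute them with a pass over the training data."""
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, p in avg.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
            p.copy_(stacked.mean(dim=0))
    return avg

# Usage (hypothetical): `base_model` comes from the large-batch phase and each
# loader feeds a differently shuffled shard of the training set to one worker.
# refined = [refine(base_model, loader) for loader in worker_loaders]
# final_model = average_weights(refined)
```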

Democratizing Production-Scale Distributed Deep Learning

Nov 03, 2018
Minghuang Ma, Hadi Pouransari, Daniel Chao, Saurabh Adya, Santiago Akle Serrano, Yi Qin, Dan Gimnicher, Dominic Walsh

Interest in and demand for training deep neural networks have grown rapidly, spanning a wide range of applications in both academia and industry. However, training them in a distributed fashion and at scale remains difficult due to the complex ecosystem of tools and hardware involved. One consequence is that the responsibility of orchestrating these complex components is often left to one-off scripts and glue code customized for specific problems. To address these difficulties, we introduce Alchemist, an internal service built at Apple from the ground up for easy, fast, and scalable distributed training. We discuss its design, implementation, and examples of running different flavors of distributed training. We also present case studies of its internal adoption in the development of autonomous systems, where training times have been reduced by 10x to keep up with the ever-growing data collection.
