Eric Noland

DeepMind

Training Compute-Optimal Large Language Models

Mar 29, 2022
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, Laurent Sifre

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
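
The equal-scaling rule lends itself to a quick back-of-the-envelope check. The following is a minimal sketch, not code from the paper: it assumes the standard C ≈ 6ND approximation for training FLOPs, square-root scaling of both parameters and tokens with compute, and the roughly 5.76e23 FLOP Gopher/Chinchilla budget as a calibration point; `compute_optimal_split` is a hypothetical helper name.

```python
import math

# Illustrative only. Assumes C ~ 6 * N * D training FLOPs and
# N, D ~ sqrt(C), calibrated to the operating point reported in the
# paper: Gopher's ~5.76e23 FLOP budget and Chinchilla's 70B parameters.
def compute_optimal_split(flops_budget, ref_flops=5.76e23, ref_params=70e9):
    """Return (params, tokens) that spend `flops_budget` compute-optimally."""
    params = ref_params * math.sqrt(flops_budget / ref_flops)  # N ~ C^0.5
    tokens = flops_budget / (6.0 * params)                     # D = C / (6N)
    return params, tokens

# Doubling model size at fixed optimality doubles the token count too:
# a 4x larger budget yields 2x the parameters and 2x the tokens.
n1, d1 = compute_optimal_split(5.76e23)
n2, d2 = compute_optimal_split(4 * 5.76e23)
print(f"{n1:.3g} params, {d1:.3g} tokens")              # ~7e10, ~1.37e12
print(f"{n2 / n1:.1f}x params, {d2 / d1:.1f}x tokens")  # 2.0x, 2.0x
```

At the calibration budget this gives roughly 20 training tokens per parameter, consistent with Chinchilla's 70B parameters trained on about 1.4 trillion tokens.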

A Short Note on the Kinetics-700-2020 Human Action Dataset

Oct 21, 2020
Lucas Smaira, João Carreira, Eric Noland, Ellen Clancy, Amy Wu, Andrew Zisserman

We describe the 2020 edition of the DeepMind Kinetics human action dataset, which replenishes and extends the Kinetics-700 dataset. In this new version, there are at least 700 video clips from different YouTube videos for each of the 700 classes. This paper details the changes introduced for this new release of the dataset and includes a comprehensive set of statistics as well as baseline results using the I3D network.

A Short Note on the Kinetics-700 Human Action Dataset

Jul 15, 2019
Joao Carreira, Eric Noland, Chloe Hillier, Andrew Zisserman

We describe an extension of the DeepMind Kinetics human action dataset from 600 classes to 700 classes, where for each class there are at least 600 video clips from different YouTube videos. This paper details the changes introduced for this new release of the dataset, and includes a comprehensive set of statistics as well as baseline results using the I3D neural network architecture.

* arXiv admin note: substantial text overlap with arXiv:1808.01340 

A Short Note about Kinetics-600

Aug 03, 2018
Joao Carreira, Eric Noland, Andras Banki-Horvath, Chloe Hillier, Andrew Zisserman

We describe an extension of the DeepMind Kinetics human action dataset from 400 classes, each with at least 400 video clips, to 600 classes, each with at least 600 video clips. In order to scale up the dataset we changed the data collection process so it uses multiple queries per class, with some of them in a language other than English (Portuguese). This paper details the changes between the two versions of the dataset and includes a comprehensive set of statistics of the new version as well as baseline results using the I3D neural network architecture. The paper is a companion to the release of the ground truth labels for the public test set.

* Companion to public release of Kinetics-600 test set labels