Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world. In this paper, we present an analysis of Transformer-based language model performance across a wide range of model scales -- from models with tens of millions of parameters up to a 280-billion-parameter model called Gopher. These models are evaluated on 152 diverse tasks, achieving state-of-the-art performance on the majority of them. Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language, whereas logical and mathematical reasoning benefit less. We provide a holistic analysis of the training dataset and the model's behaviour, covering the intersection of model scale with bias and toxicity. Finally, we discuss the application of language models to AI safety and the mitigation of downstream harms.
People learn motor activities best when they are conscious of their errors and make a concerted effort to correct them. While haptic interfaces can facilitate motor training, existing interfaces are often bulky and do not always ensure post-training skill retention. Here, we describe a programmable haptic sleeve composed of textile-based electroadhesive clutches for skill acquisition and retention. We demonstrate its functionality in a motor learning study in which users control a drone's movement using elbow joint rotation. Haptic feedback is used to restrain elbow motion and make users aware of their errors, helping them consciously learn to avoid those errors. While all subjects exhibited similar performance during the baseline phase of motor learning, subjects who received haptic feedback from the sleeve committed 23.5% fewer errors than subjects in the control group during the evaluation phase. The results show that the sleeve helps users retain and transfer motor skills better than visual feedback alone. This work demonstrates the potential of fabric-based haptic interfaces as a training aid for motor tasks in the fields of rehabilitation and teleoperation.
We describe the 2020 edition of the DeepMind Kinetics human action dataset, which replenishes and extends the Kinetics-700 dataset. In this new version, there are at least 700 video clips from different YouTube videos for each of the 700 classes. This paper details the changes introduced for this new release of the dataset and includes a comprehensive set of statistics as well as baseline results using the I3D network.