Ivo Danihelka

Muesli: Combining Improvements in Policy Optimization

Apr 13, 2021
Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, Hado van Hasselt

We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.
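
To make the shape of such an update concrete, here is a minimal numpy sketch of a policy loss that combines a policy-gradient term, a KL regularizer toward a prior policy, and a reward-model auxiliary loss. This is a hedged illustration of the ingredients named in the abstract, not the paper's exact objective; the function name, arguments, and weightings are all illustrative assumptions.

    import numpy as np

    def softmax(logits):
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def muesli_like_loss(logits, prior_logits, action, advantage,
                         predicted_reward, observed_reward,
                         kl_weight=1.0, model_weight=1.0):
        # Policy-gradient term: raise the log-probability of actions
        # with positive advantage.
        pi = softmax(logits)
        pg_loss = -advantage * np.log(pi[action])

        # Regularizer: KL(prior || current) keeps the new policy close to
        # the prior policy, as in regularized policy optimization.
        prior = softmax(prior_logits)
        kl = np.sum(prior * (np.log(prior) - np.log(pi)))

        # Auxiliary model loss: predict rewards as an extra training
        # signal, used here without any deep search.
        model_loss = (predicted_reward - observed_reward) ** 2

        return pg_loss + kl_weight * kl + model_weight * model_loss

    logits = np.array([1.0, 0.5, -0.5])
    print(muesli_like_loss(logits, logits, action=0, advantage=2.0,
                           predicted_reward=0.8, observed_reward=1.0))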

Causally Correct Partial Models for Reinforcement Learning

Feb 07, 2020
Danilo J. Rezende, Ivo Danihelka, George Papamakarios, Nan Rosemary Ke, Ray Jiang, Theophane Weber, Karol Gregor, Hamza Merzic, Fabio Viola, Jane Wang, Jovana Mitrovic, Frederic Besse, Ioannis Antonoglou, Lars Buesing

In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, yet remain fast because they do not need to fully model future observations.
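
The confounding problem is easy to reproduce in a toy simulation. In the hypothetical setup below, the behaviour policy secretly uses a part of the observation (z) that the partial model ignores; the model's reward estimate then reflects the behaviour policy rather than the environment, and a planner using it would overestimate the value of every action.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.integers(0, 2, n)             # the part of the observation we do not model
    a = z                                 # behaviour policy secretly copies z
    r = (a == z).astype(float)            # reward: 1 iff the action matches z

    # A partial model that ignores z estimates E[r | a] from behaviour data.
    for action in (0, 1):
        print(f"model estimate E[r | a={action}] =", r[a == action].mean())
    # Both estimates are 1.0, so a planner would expect reward 1 from any action.

    # True value of committing to a fixed action (z unknown at decision time): 0.5
    print("true value of a fixed action =", (rng.integers(0, 2, n) == 0).mean())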

OpenSpiel: A Framework for Reinforcement Learning in Games

Oct 10, 2019
Marc Lanctot, Edward Lockhart, Jean-Baptiste Lespiau, Vinicius Zambaldi, Satyaki Upadhyay, Julien Pérolat, Sriram Srinivasan, Finbarr Timbers, Karl Tuyls, Shayegan Omidshafiei, Daniel Hennes, Dustin Morrill, Paul Muller, Timo Ewalds, Ryan Faulkner, János Kramár, Bart De Vylder, Brennan Saeta, James Bradbury, David Ding, Sebastian Borgeaud, Matthew Lai, Julian Schrittwieser, Thomas Anthony, Edward Hughes, Ivo Danihelka, Jonah Ryan-Davis

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi-agent) zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect- and imperfect-information games, as well as traditional multiagent environments such as (partially and fully observable) grid worlds and social dilemmas. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. This document serves both as an overview of the code base and an introduction to the terminology, core concepts, and algorithms across the fields of reinforcement learning, computational game theory, and search.
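
A minimal usage example of OpenSpiel's Python API, assuming the open_spiel package is installed (the game choice and random-rollout logic are illustrative; the pyspiel calls follow the library's documented interface):

    import numpy as np
    import pyspiel

    game = pyspiel.load_game("tic_tac_toe")
    state = game.new_initial_state()
    rng = np.random.default_rng(0)
    while not state.is_terminal():
        # Play a uniformly random legal action for the current player.
        action = int(rng.choice(state.legal_actions()))
        state.apply_action(action)
    print(state.returns())  # one return per player; zero-sum for tic-tac-toe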

The Cramer Distance as a Solution to Biased Wasserstein Gradients

May 30, 2017
Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, Rémi Munos

The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated in, among other areas, ordinal regression and generative modelling. In this paper we describe three natural properties of probability divergences that reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting that this is a serious issue in practice. Leveraging insights from probabilistic forecasting, we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. To illustrate the relevance of the Cramér distance in practice we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it performs significantly better than the related Wasserstein GAN.
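
For two distributions over the reals with CDFs F_P and F_Q, the squared Cramér distance is the integral of (F_P - F_Q)^2. A hedged numpy sketch estimating it from samples on a fixed grid (the grid bounds and resolution are assumptions):

    import numpy as np

    def cramer_distance_sq(xs, ys, grid):
        # Empirical CDFs of both sample sets, evaluated on the grid.
        f_p = np.searchsorted(np.sort(xs), grid, side="right") / len(xs)
        f_q = np.searchsorted(np.sort(ys), grid, side="right") / len(ys)
        dt = grid[1] - grid[0]
        # Riemann sum of (F_P - F_Q)^2; the 1-Wasserstein distance would
        # instead integrate the absolute difference |F_P - F_Q|.
        return np.sum((f_p - f_q) ** 2) * dt

    rng = np.random.default_rng(0)
    grid = np.linspace(-10.0, 10.0, 2001)
    print(cramer_distance_sq(rng.normal(0, 1, 5000), rng.normal(1, 1, 5000), grid))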

Comparison of Maximum Likelihood and GAN-based training of Real NVPs

May 15, 2017
Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, Peter Dayan

We train a generator by maximum likelihood, and we train the same generator architecture with the Wasserstein GAN objective. We then compare the generated samples, exact log-probability densities and approximate Wasserstein distances. We show that an independent critic, trained to approximate the Wasserstein distance between the validation set and the generator distribution, helps detect overfitting. Finally, we use ideas from the one-shot learning literature to develop a novel fast-learning critic.
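
The overfitting check can be illustrated in one dimension without a learned critic, since the Wasserstein-1 distance between two equally sized sample sets is exact there (sorted-sample formula). This toy sketch swaps the paper's learned critic for that exact distance: a generator that replays its training set sits at distance zero from the training data but not from validation data.

    import numpy as np

    def w1_distance(xs, ys):
        # Exact Wasserstein-1 between two equally sized 1-D sample sets.
        return np.abs(np.sort(xs) - np.sort(ys)).mean()

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 2000)
    valid = rng.normal(0.0, 1.0, 2000)

    honest = rng.normal(0.0, 1.0, 2000)    # a generator that matched the distribution
    memorizer = rng.permutation(train)     # a generator that replays its training set

    for name, gen in [("honest", honest), ("memorizer", memorizer)]:
        print(name, w1_distance(gen, train), w1_distance(gen, valid))
    # The honest generator is roughly equidistant from train and validation;
    # the memorizer is at distance zero from train but not from validation.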

Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes

Oct 27, 2016
Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, Timothy P Lillicrap

Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications such as language modeling and machine translation. However, they scale poorly in both space and time as the amount of memory grows, limiting their applicability to real-world domains. Here, we present an end-to-end differentiable memory access scheme, which we call Sparse Access Memory (SAM), that retains the representational power of the original approaches whilst training efficiently with very large memories. We show that SAM achieves asymptotic lower bounds in space and time complexity, and find that an implementation runs 1,000× faster and with 3,000× less physical memory than non-sparse models. SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories. We also show how our approach can be adapted for models that maintain temporal associations between memories, as with the recently introduced Differentiable Neural Computer.

* in 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain 
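
The core of a sparse read is to attend over only the k most similar memory slots instead of all of them, so the cost of the softmax and the weighted sum scales with k rather than with memory size. A minimal numpy sketch of such a read (the dot-product similarity and the value of k are assumptions; the paper's full scheme also covers sparse writes and efficient nearest-neighbour indexing):

    import numpy as np

    def sparse_read(memory, query, k=4):
        # Content-based scores against every slot; a real implementation
        # would use an approximate nearest-neighbour index to avoid this
        # full pass over memory.
        scores = memory @ query                     # (num_slots,)
        top = np.argpartition(scores, -k)[-k:]      # indices of the k best slots
        w = np.exp(scores[top] - scores[top].max())
        w /= w.sum()                                # softmax over the k slots only
        return w @ memory[top]                      # sparse weighted read vector

    memory = np.random.default_rng(0).normal(size=(100_000, 16))
    print(sparse_read(memory, memory[42], k=4))     # should resemble slot 42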
Video Pixel Networks

Oct 03, 2016
Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, Koray Kavukcuoglu

We propose a probabilistic video model, the Video Pixel Network (VPN), that estimates the discrete joint distribution of the raw pixel values in a video. The model and the neural architecture reflect the time, space and color structure of video tensors and encode it as a four-dimensional dependency chain. The VPN approaches the best possible performance on the Moving MNIST benchmark, a leap over the previous state of the art, and the generated videos show only minor deviations from the ground truth. The VPN also produces detailed samples on the action-conditional Robotic Pushing benchmark and generalizes to the motion of novel objects.

* 16 pages 
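
The quantity being modeled is the discrete joint over raw pixel values, factored autoregressively over time, height, width and colour channel. A hedged sketch of evaluating that factorized log-likelihood, assuming a network has already produced one 256-way categorical per pixel (the shapes are illustrative; in the real model it is the architecture that enforces the dependency chain):

    import numpy as np

    def video_log_likelihood(logits, video):
        # logits: (T, H, W, C, 256) raw scores per pixel value, where each
        # pixel's scores may only depend on pixels earlier in (t, h, w, c)
        # raster order. video: (T, H, W, C) uint8 ground-truth values.
        m = logits.max(axis=-1, keepdims=True)
        log_probs = logits - (m + np.log(np.exp(logits - m).sum(-1, keepdims=True)))
        picked = np.take_along_axis(log_probs, video[..., None].astype(np.int64), -1)
        return picked.sum()  # log p(video) = sum of per-pixel conditional log-probs

    rng = np.random.default_rng(0)
    logits = rng.normal(size=(2, 4, 4, 3, 256))
    video = rng.integers(0, 256, size=(2, 4, 4, 3), dtype=np.uint8)
    print(video_log_likelihood(logits, video))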
Memory-Efficient Backpropagation Through Time

Jun 10, 2016
Audrūnas Gruslys, Remi Munos, Ivo Danihelka, Marc Lanctot, Alex Graves

We propose a novel approach to reduce the memory consumption of the backpropagation through time (BPTT) algorithm when training recurrent neural networks (RNNs). Our approach uses dynamic programming to balance the trade-off between caching intermediate results and recomputing them. The algorithm can fit tightly within almost any user-set memory budget while finding an execution policy that minimizes computational cost. This matters in practice because computational devices have limited memory, and maximizing performance within a fixed memory budget is a common use case. We provide asymptotic computational upper bounds for various regimes. The algorithm is particularly effective for long sequences: for sequences of length 1000, it saves 95% of memory usage while using only one third more time per iteration than standard BPTT.
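
The trade-off is easiest to see in the classic square-root checkpointing special case: cache every sqrt(T)-th hidden state on the forward pass, then recompute each segment from its checkpoint during the backward pass. The self-contained sketch below does this for a scalar tanh RNN and checks it against full-storage BPTT; the paper's dynamic program generalizes this to arbitrary memory budgets, so this is an illustration of the idea, not the paper's algorithm.

    import math
    import numpy as np

    def forward(w, h0, xs):
        hs = [h0]                              # hs[i] is the state after i steps
        for x in xs:
            hs.append(np.tanh(w * hs[-1] + x))
        return hs

    def full_bptt(w, h0, xs):
        # Standard BPTT: stores all T+1 hidden states. Loss is L = h_T.
        hs = forward(w, h0, xs)
        g, dw = 1.0, 0.0                       # g = dL/dh_t, starting at t = T
        for t in range(len(xs), 0, -1):
            local = g * (1.0 - hs[t] ** 2)     # backprop through tanh
            dw += local * hs[t - 1]
            g = local * w
        return dw

    def checkpointed_bptt(w, h0, xs):
        # Cache every stride-th state (O(sqrt(T)) memory), recompute the rest.
        T = len(xs)
        stride = max(1, math.isqrt(T))
        ckpt, h = {0: h0}, h0
        for t, x in enumerate(xs, 1):
            h = np.tanh(w * h + x)
            if t % stride == 0:
                ckpt[t] = h
        g, dw = 1.0, 0.0
        for start in range(((T - 1) // stride) * stride, -1, -stride):
            end = min(start + stride, T)
            hs = forward(w, ckpt[start], xs[start:end])  # recompute one segment
            for t in range(end, start, -1):
                local = g * (1.0 - hs[t - start] ** 2)
                dw += local * hs[t - start - 1]
                g = local * w
        return dw

    xs = np.random.default_rng(0).normal(size=1000)
    print(full_bptt(0.5, 0.0, xs))             # the two gradients should match
    print(checkpointed_bptt(0.5, 0.0, xs))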
