Mark Rowland

Game Plan: What AI can do for Football, and What Football can do for AI

Nov 18, 2020
Karl Tuyls, Shayegan Omidshafiei, Paul Muller, Zhe Wang, Jerome Connor, Daniel Hennes, Ian Graham, William Spearman, Tim Waskett, Dafydd Steele, Pauline Luc, Adria Recasens, Alexandre Galashov, Gregory Thornton, Romuald Elie, Pablo Sprechmann, Pol Moreno, Kris Cao, Marta Garnelo, Praneet Dutta, Michal Valko, Nicolas Heess, Alex Bridgland, Julien Perolat, Bart De Vylder, Ali Eslami, Mark Rowland, Andrew Jaegle, Rémi Munos, Trevor Back, Razia Ahamed, Simon Bouton, Nathalie Beauguerlange, Jackson Broshear, Thore Graepel, Demis Hassabis

Revisiting Fundamentals of Experience Replay

Jul 13, 2020
William Fedus, Prajit Ramachandran, Rishabh Agarwal, Yoshua Bengio, Hugo Larochelle, Mark Rowland, Will Dabney

The Value-Improvement Path: Towards Better Representations for Reinforcement Learning

Jun 03, 2020
Will Dabney, André Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, David Silver

Navigating the Landscape of Games

May 04, 2020
Shayegan Omidshafiei, Karl Tuyls, Wojciech M. Czarnecki, Francisco C. Santos, Mark Rowland, Jerome Connor, Daniel Hennes, Paul Muller, Julien Perolat, Bart De Vylder, Audrunas Gruslys, Rémi Munos

From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization

Feb 19, 2020
Julien Perolat, Rémi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, Georgios Piliouras, Marc Lanctot, Karl Tuyls

Multiagent Evaluation under Incomplete Information

Oct 30, 2019
Mark Rowland, Shayegan Omidshafiei, Karl Tuyls, Julien Perolat, Michal Valko, Georgios Piliouras, Remi Munos

Conditional Importance Sampling for Off-Policy Learning

Oct 16, 2019
Mark Rowland, Anna Harutyunyan, Hado van Hasselt, Diana Borsa, Tom Schaul, Rémi Munos, Will Dabney

Adaptive Trade-Offs in Off-Policy Learning

Oct 16, 2019
Mark Rowland, Will Dabney, Rémi Munos

A Generalized Training Approach for Multiagent Learning

Sep 27, 2019
Paul Muller, Shayegan Omidshafiei, Mark Rowland, Karl Tuyls, Julien Perolat, Siqi Liu, Daniel Hennes, Luke Marris, Marc Lanctot, Edward Hughes, Zhe Wang, Guy Lever, Nicolas Heess, Thore Graepel, Rémi Munos
