Edward Lockhart

Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning

Jun 30, 2022
Julien Perolat, Bart de Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T. Connor, Neil Burch, Thomas Anthony, Stephen McAleer, Romuald Elie, Sarah H. Cen, Zhe Wang, Audrunas Gruslys, Aleksandra Malysheva, Mina Khan, Sherjil Ozair, Finbarr Timbers, Toby Pohlen, Tom Eccles, Mark Rowland, Marc Lanctot, Jean-Baptiste Lespiau, Bilal Piot, Shayegan Omidshafiei, Edward Lockhart, Laurent Sifre, Nathalie Beauguerlange, Remi Munos, David Silver, Satinder Singh, Demis Hassabis, Karl Tuyls

Solving Common-Payoff Games with Approximate Policy Iteration

Jan 11, 2021
Samuel Sokota, Edward Lockhart, Finbarr Timbers, Elnaz Davoodi, Ryan D'Orazio, Neil Burch, Martin Schmid, Michael Bowling, Marc Lanctot

Human-Agent Cooperation in Bridge Bidding

Nov 28, 2020
Edward Lockhart, Neil Burch, Nolan Bard, Sebastian Borgeaud, Tom Eccles, Lucas Smaira, Ray Smith

Approximate exploitability: Learning a best response in large games

Apr 20, 2020
Finbarr Timbers, Edward Lockhart, Martin Schmid, Marc Lanctot, Michael Bowling

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

Nov 19, 2019
Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver

OpenSpiel: A Framework for Reinforcement Learning in Games

Oct 10, 2019
Marc Lanctot, Edward Lockhart, Jean-Baptiste Lespiau, Vinicius Zambaldi, Satyaki Upadhyay, Julien Pérolat, Sriram Srinivasan, Finbarr Timbers, Karl Tuyls, Shayegan Omidshafiei, Daniel Hennes, Dustin Morrill, Paul Muller, Timo Ewalds, Ryan Faulkner, János Kramár, Bart De Vylder, Brennan Saeta, James Bradbury, David Ding, Sebastian Borgeaud, Matthew Lai, Julian Schrittwieser, Thomas Anthony, Edward Hughes, Ivo Danihelka, Jonah Ryan-Davis

Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent

Mar 21, 2019
Edward Lockhart, Marc Lanctot, Julien Pérolat, Jean-Baptiste Lespiau, Dustin Morrill, Finbarr Timbers, Karl Tuyls

Consistent Jumpy Predictions for Videos and Scenes

Oct 02, 2018
Ananya Kumar, S. M. Ali Eslami, Danilo J. Rezende, Marta Garnelo, Fabio Viola, Edward Lockhart, Murray Shanahan
