Thore Graepel

Game Theoretic Rating in N-player general-sum games with Equilibria

Oct 05, 2022
Luke Marris, Marc Lanctot, Ian Gemp, Shayegan Omidshafiei, Stephen McAleer, Jerome Connor, Karl Tuyls, Thore Graepel


NeuPL: Neural Population Learning

Feb 15, 2022
Siqi Liu, Luke Marris, Daniel Hennes, Josh Merel, Nicolas Heess, Thore Graepel


Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria

Jan 05, 2022
Kavya Kopparapu, Edgar A. Duéñez-Guzmán, Jayd Matyas, Alexander Sasha Vezhnevets, John P. Agapiou, Kevin R. McKee, Richard Everett, Janusz Marecki, Joel Z. Leibo, Thore Graepel


A PAC-Bayesian Analysis of Distance-Based Classifiers: Why Nearest-Neighbour works!

Sep 28, 2021
Thore Graepel, Ralf Herbrich


Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot

Jul 14, 2021
Joel Z. Leibo, Edgar Duéñez-Guzmán, Alexander Sasha Vezhnevets, John P. Agapiou, Peter Sunehag, Raphael Koster, Jayd Matyas, Charles Beattie, Igor Mordatch, Thore Graepel


Multi-Agent Training beyond Zero-Sum with Correlated Equilibrium Meta-Solvers

Jun 22, 2021
Luke Marris, Paul Muller, Marc Lanctot, Karl Tuyls, Thore Graepel


From Motor Control to Team Play in Simulated Humanoid Football

May 25, 2021
Siqi Liu, Guy Lever, Zhe Wang, Josh Merel, S. M. Ali Eslami, Daniel Hennes, Wojciech M. Czarnecki, Yuval Tassa, Shayegan Omidshafiei, Abbas Abdolmaleki, Noah Y. Siegel, Leonard Hasenclever, Luke Marris, Saran Tunyasuvunakool, H. Francis Song, Markus Wulfmeier, Paul Muller, Tuomas Haarnoja, Brendan D. Tracey, Karl Tuyls, Thore Graepel, Nicolas Heess


Deep reinforcement learning models the emergent dynamics of human cooperation

Mar 08, 2021
Kevin R. McKee, Edward Hughes, Tina O. Zhu, Martin J. Chadwick, Raphael Koster, Antonio Garcia Castaneda, Charlie Beattie, Thore Graepel, Matt Botvinick, Joel Z. Leibo


EigenGame Unloaded: When playing games is better than optimizing

Feb 08, 2021
Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel


Open Problems in Cooperative AI

Dec 15, 2020
Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, Thore Graepel
