Julien Perolat

From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization

Feb 19, 2020
Julien Perolat, Remi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, Georgios Piliouras, Marc Lanctot, Karl Tuyls


Multiagent Evaluation under Incomplete Information

Oct 30, 2019
Mark Rowland, Shayegan Omidshafiei, Karl Tuyls, Julien Perolat, Michal Valko, Georgios Piliouras, Remi Munos


A Generalized Training Approach for Multiagent Learning

Sep 27, 2019
Paul Muller, Shayegan Omidshafiei, Mark Rowland, Karl Tuyls, Julien Perolat, Siqi Liu, Daniel Hennes, Luke Marris, Marc Lanctot, Edward Hughes, Zhe Wang, Guy Lever, Nicolas Heess, Thore Graepel, Remi Munos


Foolproof Cooperative Learning

Jun 24, 2019
Alexis Jacq, Julien Perolat, Matthieu Geist, Olivier Pietquin


Neural Replicator Dynamics

Jun 01, 2019
Shayegan Omidshafiei, Daniel Hennes, Dustin Morrill, Remi Munos, Julien Perolat, Marc Lanctot, Audrunas Gruslys, Jean-Baptiste Lespiau, Karl Tuyls


Open-ended Learning in Symmetric Zero-sum Games

Jan 23, 2019
David Balduzzi, Marta Garnelo, Yoram Bachrach, Wojciech M. Czarnecki, Julien Perolat, Max Jaderberg, Thore Graepel


Malthusian Reinforcement Learning

Dec 17, 2018
Joel Z. Leibo, Julien Perolat, Edward Hughes, Steven Wheelwright, Adam H. Marblestone, Edgar Duéñez-Guzmán, Peter Sunehag, Iain Dunning, Thore Graepel


Re-evaluating Evaluation

Oct 30, 2018
David Balduzzi, Karl Tuyls, Julien Perolat, Thore Graepel


Actor-Critic Policy Optimization in Partially Observable Multiagent Environments

Oct 21, 2018
Sriram Srinivasan, Marc Lanctot, Vinicius Zambaldi, Julien Perolat, Karl Tuyls, Remi Munos, Michael Bowling
