Ian Gemp

EigenGame: PCA as a Nash Equilibrium
Oct 01, 2020
Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel

Learning to Play No-Press Diplomacy with Best Response Policy Iteration
Jun 17, 2020
Thomas Anthony, Tom Eccles, Andrea Tacchetti, János Kramár, Ian Gemp, Thomas C. Hudson, Nicolas Porcel, Marc Lanctot, Julien Pérolat, Richard Everett, Satinder Singh, Thore Graepel, Yoram Bachrach

Proximal Gradient Temporal Difference Learning: Stable Reinforcement Learning with Polynomial Sample Complexity
Jun 06, 2020
Bo Liu, Ian Gemp, Mohammad Ghavamzadeh, Ji Liu, Sridhar Mahadevan, Marek Petrik

Social Diversity and Social Preferences in Mixed-Motive Reinforcement Learning
Feb 12, 2020
Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo

Global Convergence to the Equilibrium of GANs using Variational Inequalities
Sep 11, 2018
Ian Gemp, Sridhar Mahadevan

Online Monotone Games
Oct 19, 2017
Ian Gemp, Sridhar Mahadevan

Inverting Variational Autoencoders for Improved Generative Accuracy
Aug 24, 2017
Ian Gemp, Ishan Durugkar, Mario Parente, M. Darby Dyar, Sridhar Mahadevan

Generative Multi-Adversarial Networks
Mar 02, 2017
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan