Georgios Piliouras

Online Optimization in Games via Control Theory: Connecting Regret, Passivity and Poincaré Recurrence

Jun 15, 2021

Efficient Online Learning for Dynamic k-Clustering

Jun 08, 2021

Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games

Jun 03, 2021

Learning in Matrix Games can be Arbitrarily Complex

Mar 05, 2021

Scaling up Mean Field Games with Online Mirror Descent

Feb 28, 2021

Follow-the-Regularized-Leader Routes to Chaos in Routing Games

Feb 17, 2021

Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent

Jan 13, 2021

Evolutionary Game Theory Squared: Evolving Agents in Endogenously Evolving Zero-Sum Games

Dec 15, 2020

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

Nov 05, 2020

No-regret learning and mixed Nash equilibria: They do not mix

Oct 20, 2020