Joel Z. Leibo

Open Problems in Cooperative AI

Dec 15, 2020
Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, Thore Graepel

DeepMind Lab2D

Dec 12, 2020
Charles Beattie, Thomas Köppe, Edgar A. Duéñez-Guzmán, Joel Z. Leibo

Negotiating Team Formation Using Deep Reinforcement Learning

Oct 20, 2020
Yoram Bachrach, Richard Everett, Edward Hughes, Angeliki Lazaridou, Joel Z. Leibo, Marc Lanctot, Michael Johanson, Wojciech M. Czarnecki, Thore Graepel

Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games

Feb 27, 2020
Edward Hughes, Thomas W. Anthony, Tom Eccles, Joel Z. Leibo, David Balduzzi, Yoram Bachrach

Social diversity and social preferences in mixed-motive reinforcement learning

Feb 12, 2020
Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo

Social Diversity and Social Preferences in Mixed-Motive Reinforcement Learning

Feb 06, 2020
Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo

Silly rules improve the capacity of agents to learn stable enforcement and compliance behaviors

Jan 25, 2020
Raphael Köster, Dylan Hadfield-Menell, Gillian K. Hadfield, Joel Z. Leibo

Options as responses: Grounding behavioural hierarchies in multi-agent RL

Jun 06, 2019
Alexander Sasha Vezhnevets, Yuhuai Wu, Remi Leblond, Joel Z. Leibo

Interval timing in deep reinforcement learning agents

May 31, 2019
Ben Deverett, Ryan Faulkner, Meire Fortunato, Greg Wayne, Joel Z. Leibo
