Richard Everett

Heterogeneous Social Value Orientation Leads to Meaningful Diversity in Sequential Social Dilemmas

May 01, 2023
Udari Madhushani, Kevin R. McKee, John P. Agapiou, Joel Z. Leibo, Richard Everett, Thomas Anthony, Edward Hughes, Karl Tuyls, Edgar A. Duéñez-Guzmán

Developing, Evaluating and Scaling Learning Agents in Multi-Agent Environments

Sep 22, 2022
Ian Gemp, Thomas Anthony, Yoram Bachrach, Avishkar Bhoopchand, Kalesha Bullard, Jerome Connor, Vibhavari Dasagi, Bart De Vylder, Edgar Duenez-Guzman, Romuald Elie, Richard Everett, Daniel Hennes, Edward Hughes, Mina Khan, Marc Lanctot, Kate Larson, Guy Lever, Siqi Liu, Luke Marris, Kevin R. McKee, Paul Muller, Julien Perolat, Florian Strub, Andrea Tacchetti, Eugene Tarassov, Zhe Wang, Karl Tuyls

Stochastic Parallelizable Eigengap Dilation for Large Graph Clustering

Jul 29, 2022
Elise van der Pol, Ian Gemp, Yoram Bachrach, Richard Everett

Learning Robust Real-Time Cultural Transmission without Human Data

Mar 01, 2022
Cultural General Intelligence Team, Avishkar Bhoopchand, Bethanie Brownfield, Adrian Collister, Agustin Dal Lago, Ashley Edwards, Richard Everett, Alexandre Frechette, Yanko Gitahy Oliveira, Edward Hughes, Kory W. Mathewson, Piermaria Mendolicchio, Julia Pawar, Miruna Pislar, Alex Platonov, Evan Senter, Sukhdeep Singh, Alexander Zacherl, Lei M. Zhang

Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria

Jan 05, 2022
Kavya Kopparapu, Edgar A. Duéñez-Guzmán, Jayd Matyas, Alexander Sasha Vezhnevets, John P. Agapiou, Kevin R. McKee, Richard Everett, Janusz Marecki, Joel Z. Leibo, Thore Graepel

Collaborating with Humans without Human Data

Oct 15, 2021
DJ Strouse, Kevin R. McKee, Matt Botvinick, Edward Hughes, Richard Everett

Quantifying environment and population diversity in multi-agent reinforcement learning

Feb 16, 2021
Kevin R. McKee, Joel Z. Leibo, Charlie Beattie, Richard Everett

Modelling Cooperation in Network Games with Spatio-Temporal Complexity

Feb 13, 2021
Michiel A. Bakker, Richard Everett, Laura Weidinger, Iason Gabriel, William S. Isaac, Joel Z. Leibo, Edward Hughes

Negotiating Team Formation Using Deep Reinforcement Learning

Oct 20, 2020
Yoram Bachrach, Richard Everett, Edward Hughes, Angeliki Lazaridou, Joel Z. Leibo, Marc Lanctot, Michael Johanson, Wojciech M. Czarnecki, Thore Graepel

Learning to Play No-Press Diplomacy with Best Response Policy Iteration

Jun 17, 2020
Thomas Anthony, Tom Eccles, Andrea Tacchetti, János Kramár, Ian Gemp, Thomas C. Hudson, Nicolas Porcel, Marc Lanctot, Julien Pérolat, Richard Everett, Satinder Singh, Thore Graepel, Yoram Bachrach
