Jakob Foerster

Centralized Model and Exploration Policy for Multi-Agent RL

Jul 14, 2021
Qizhen Zhang, Chris Lu, Animesh Garg, Jakob Foerster

A New Formalism, Method and Open Issues for Zero-Shot Coordination

Jul 06, 2021
Johannes Treutlein, Michael Dennis, Caspar Oesterheld, Jakob Foerster

Learned Belief Search: Efficiently Improving Policies in Partially Observable Settings

Jun 16, 2021
Hengyuan Hu, Adam Lerer, Noam Brown, Jakob Foerster

Quasi-Equivalence Discovery for Zero-Shot Emergent Communication

Mar 14, 2021
Kalesha Bullard, Douwe Kiela, Joelle Pineau, Jakob Foerster

Off-Belief Learning

Mar 06, 2021
Hengyuan Hu, Adam Lerer, Brandon Cui, Luis Pineda, David Wu, Noam Brown, Jakob Foerster

Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian

Nov 12, 2020
Jack Parker-Holder, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alex Peysakhovich, Aldo Pacchiano, Jakob Foerster

Exploring Zero-Shot Emergent Communication in Embodied Multi-Agent Populations

Oct 29, 2020
Kalesha Bullard, Franziska Meier, Douwe Kiela, Joelle Pineau, Jakob Foerster

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

Sep 23, 2020
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

Mar 19, 2020
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson
