
Jakob Nicolaus Foerster

ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages

Jun 12, 2023
Andrew Jesson, Chris Lu, Gunshi Gupta, Angelos Filos, Jakob Nicolaus Foerster, Yarin Gal

This paper introduces a novel method for enhancing the effectiveness of on-policy Deep Reinforcement Learning (DRL) algorithms. Three surprisingly simple modifications to the A3C algorithm, namely (1) processing advantage estimates through a ReLU function, (2) spectral normalization, and (3) dropout, serve not only to improve efficacy but also to yield a "cautious" DRL algorithm. Whereas on-policy algorithms such as Proximal Policy Optimization (PPO) and Asynchronous Advantage Actor-Critic (A3C) do not explicitly account for cautious interaction with the environment, our method integrates caution in two critical ways: (1) by maximizing a lower bound on the value function plus a constant, thereby promoting conservative value estimation, and (2) by incorporating Thompson sampling for cautious exploration. In proving that our algorithm maximizes the lower bound, we also ground Regret Matching Policy Gradients (RMPG), a discrete-action on-policy method for multi-agent reinforcement learning. Our rigorous empirical evaluations across various benchmarks demonstrate our approach's improved performance over existing on-policy algorithms. This research represents a substantial step towards efficacious and cautious DRL algorithms, which are needed to unlock applications to complex, real-world problems.
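
To make the three modifications concrete, here is a minimal PyTorch sketch, not the authors' implementation: the network width, dropout rate, and loss shape are illustrative assumptions, and only the placement of the ReLU on the advantage estimates, the spectral normalization, and the dropout follows the description above.

```python
# Minimal sketch (assumed PyTorch setup, not the authors' code) of the three
# modifications: ReLU on advantages, spectral normalization, and dropout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    def __init__(self, obs_dim, hidden=64, p_drop=0.1):  # sizes are illustrative
        super().__init__()
        # (2) spectral normalization and (3) dropout applied to the value network
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(obs_dim, hidden)),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.utils.spectral_norm(nn.Linear(hidden, 1)),
        )

    def forward(self, obs):
        return self.net(obs).squeeze(-1)

def policy_loss(log_probs, returns, values):
    # (1) pass advantage estimates through a ReLU, so only transitions with
    # positive estimated advantage contribute to the policy gradient
    advantages = F.relu(returns - values).detach()
    return -(advantages * log_probs).mean()
```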

Cheap Talk Discovery and Utilization in Multi-Agent Reinforcement Learning

Mar 19, 2023
Yat Long Lo, Christian Schroeder de Witt, Samuel Sokota, Jakob Nicolaus Foerster, Shimon Whiteson

By enabling agents to communicate, recent cooperative multi-agent reinforcement learning (MARL) methods have demonstrated better task performance and more coordinated behavior. Most existing approaches facilitate inter-agent communication by allowing agents to send messages to each other through free communication channels, i.e., cheap talk channels. Current methods require these channels to be constantly accessible and known to the agents a priori. In this work, we lift these requirements such that the agents must discover the cheap talk channels and learn how to use them. Hence, the problem has two main parts: cheap talk discovery (CTD) and cheap talk utilization (CTU). We introduce a novel conceptual framework for both parts and develop a new algorithm based on mutual information maximization that outperforms existing algorithms in CTD/CTU settings. We also release a novel benchmark suite to stimulate future research in CTD/CTU.

* The 11th International Conference on Learning Representations (ICLR) 
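
The algorithm itself is not reproduced here, but since the abstract describes it as building on mutual information maximization, the following generic InfoNCE-style lower bound on mutual information may help make that phrase concrete. It is a standard construction, not the paper's objective; the score matrix and its shape are assumptions for illustration.

```python
# Generic InfoNCE-style lower bound on mutual information (a standard
# construction, not this paper's objective); shapes are illustrative.
import math
import torch
import torch.nn.functional as F

def infonce_lower_bound(scores: torch.Tensor) -> torch.Tensor:
    # scores[i, j] is a learned critic's score for pairing sample i of one
    # variable with sample j of the other; the diagonal holds the jointly
    # drawn (positive) pairs.
    n = scores.shape[0]
    targets = torch.arange(n, device=scores.device)
    # I(X; Y) >= log(n) minus the row-wise cross-entropy of the scores.
    return math.log(n) - F.cross_entropy(scores, targets)
```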

Proximal Learning With Opponent-Learning Awareness

Oct 18, 2022
Stephen Zhao, Chris Lu, Roger Baker Grosse, Jakob Nicolaus Foerster

Learning With Opponent-Learning Awareness (LOLA) (Foerster et al. [2018a]) is a multi-agent reinforcement learning algorithm that typically learns reciprocity-based cooperation in partially competitive environments. However, LOLA often fails to learn such behavior on more complex policy spaces parameterized by neural networks, partly because the update rule is sensitive to the policy parameterization. This problem is especially pronounced in the opponent modeling setting, where the opponent's policy is unknown and must be inferred from observations; in such settings, LOLA is ill-specified because behaviorally equivalent opponent policies can result in non-equivalent updates. To address this shortcoming, we reinterpret LOLA as approximating a proximal operator and then derive a new algorithm, proximal LOLA (POLA), which uses the proximal formulation directly. Unlike LOLA, the POLA updates are parameterization invariant, in the sense that when the proximal objective has a unique optimum, behaviorally equivalent policies result in behaviorally equivalent updates. We then present practical approximations to the ideal POLA update, which we evaluate in several partially competitive environments with function approximation and opponent modeling. These experiments demonstrate empirically that POLA achieves reciprocity-based cooperation more reliably than LOLA.

* 24 pages (10 pages main paper), 5 figures, to be published in the 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
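
As a rough illustration of what "using the proximal formulation directly" can look like, here is a generic proximal-point policy update in PyTorch: an inner loop maximizes the agent's objective while penalizing behavioral distance to the current policy. This is a sketch of the general idea rather than the POLA update itself; objective_fn, behavioral_distance, beta, the step count, and the learning rate are all assumptions.

```python
# Generic proximal-point policy update (an illustration of the general idea,
# not the POLA algorithm); objective_fn and behavioral_distance are assumed
# user-supplied callables returning scalar tensors.
import copy
import torch

def proximal_update(policy, objective_fn, behavioral_distance,
                    beta=1.0, inner_steps=10, lr=1e-2):
    anchor = copy.deepcopy(policy)      # current policy, held fixed
    candidate = copy.deepcopy(policy)   # optimized in the inner loop
    opt = torch.optim.SGD(candidate.parameters(), lr=lr)
    for _ in range(inner_steps):
        # maximize the objective while staying behaviorally close to the anchor
        loss = -objective_fn(candidate) + beta * behavioral_distance(candidate, anchor)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return candidate
```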