Daewoo Kim

Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation

May 06, 2022
Maximilian Igl, Daewoo Kim, Alex Kuefler, Paul Mougin, Punit Shah, Kyriacos Shiarlis, Dragomir Anguelov, Mark Palatucci, Brandyn White, Shimon Whiteson

Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.

* Accepted to ICRA-2022 
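
The core mechanism described in the abstract, a beam search over parallel rollouts in which a discriminator prunes unrealistic branches, can be illustrated with a toy sketch. This is a minimal illustration built from placeholder components (step_policy, score_realism, and the scalar state are hypothetical stand-ins), not Symphony's actual implementation:

```python
# Minimal, illustrative sketch of discriminator-pruned beam search over
# simulated rollouts. All names (step_policy, score_realism, etc.) are
# hypothetical stand-ins, not Symphony's code.
import random


def step_policy(state):
    """Sample a candidate next state from a placeholder driving policy."""
    return state + random.gauss(0.0, 1.0)


def score_realism(trajectory):
    """Placeholder discriminator: higher means 'more realistic'.
    Here we simply prefer smooth trajectories as a stand-in."""
    if len(trajectory) < 2:
        return 0.0
    jumps = [abs(b - a) for a, b in zip(trajectory, trajectory[1:])]
    return -sum(jumps) / len(jumps)


def beam_search_rollout(init_state, horizon=10, beam_width=4, branches=3):
    """Roll out `beam_width` parallel trajectories; at each step, branch each
    one `branches` times and keep only the top-scoring `beam_width` branches."""
    beams = [[init_state] for _ in range(beam_width)]
    for _ in range(horizon):
        candidates = [beam + [step_policy(beam[-1])]
                      for beam in beams
                      for _ in range(branches)]
        # Prune the branches the discriminator judges least realistic.
        candidates.sort(key=score_realism, reverse=True)
        beams = candidates[:beam_width]
    return max(beams, key=score_realism)


print(beam_search_rollout(0.0))
```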

QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning

May 14, 2019
Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, Yung Yi

We explore value-based solutions for multi-agent reinforcement learning (MARL) tasks in the recently popularized regime of centralized training with decentralized execution (CTDE). VDN and QMIX are representative examples that factorize the joint action-value function into individual ones for decentralized execution. However, VDN and QMIX address only a fraction of factorizable MARL tasks because of their structural constraints on the factorization, such as additivity and monotonicity. In this paper, we propose a new factorization method for MARL, QTRAN, which is free from such structural constraints and takes a new approach: transforming the original joint action-value function into an easily factorizable one with the same optimal actions. QTRAN guarantees more general factorization than VDN or QMIX, thus covering a much wider class of MARL tasks than previous methods. Our experiments on multi-domain Gaussian-squeeze and modified predator-prey tasks demonstrate QTRAN's superior performance, with especially large margins in games whose payoffs penalize non-cooperative behavior more aggressively.

* 18 pages; Accepted to ICML 2019 
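
To make the factorization idea concrete, here is a toy tabular sketch of QTRAN-style consistency conditions for two agents: the sum of individual utilities, shifted by a state value V, must match the joint action-value at the greedy joint action and upper-bound it elsewhere. The tables, variable names, and loss terms below are illustrative assumptions, not the paper's code:

```python
# Toy, tabular illustration of QTRAN-style factorization conditions for two
# agents in a single fixed state. Everything here is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3

# Joint action-value table Q_jt(a1, a2).
q_joint = rng.normal(size=(n_actions, n_actions))

# Learnable pieces in QTRAN: per-agent utilities Q_i and a state value V.
q1 = rng.normal(size=n_actions)
q2 = rng.normal(size=n_actions)
v = float(rng.normal())

# Greedy local actions define the decentralized argmax.
a1_star, a2_star = int(np.argmax(q1)), int(np.argmax(q2))

# Transformed sum Q'(a) = Q_1(a1) + Q_2(a2); QTRAN requires
#   Q'(a*) - Q_jt(a*) + V == 0        at the optimal joint action (L_opt)
#   Q'(a)  - Q_jt(a)  + V >= 0  for all other joint actions (L_nopt)
q_sum = q1[:, None] + q2[None, :]
residual = q_sum - q_joint + v

l_opt = residual[a1_star, a2_star] ** 2
l_nopt = np.sum(np.minimum(residual, 0.0) ** 2)

print("L_opt  =", l_opt)
print("L_nopt =", l_nopt)
```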

Learning to Schedule Communication in Multi-agent Reinforcement Learning

Feb 05, 2019
Daewoo Kim, Sangwoo Moon, David Hostallero, Wan Ju Kang, Taeyoung Lee, Kyunghwan Son, Yung Yi

Many real-world reinforcement learning tasks require multiple agents to make sequential decisions while interacting with one another, and well-coordinated actions among the agents are crucial for achieving the target goal. One way to accelerate coordination is to enable the agents to communicate with each other in a distributed manner and behave as a group. In this paper, we study a practical scenario in which (i) the communication bandwidth is limited and (ii) the agents share the communication medium, so that only a restricted number of agents can use it simultaneously, as in state-of-the-art wireless networking standards. This calls for some form of communication scheduling. To that end, we propose a multi-agent deep reinforcement learning framework, called SchedNet, in which agents learn how to schedule themselves, how to encode their messages, and how to select actions based on received messages. SchedNet decides which agents should be entitled to broadcast their (encoded) messages by learning the importance of each agent's partially observed information. We evaluate SchedNet against multiple baselines in two applications: cooperative communication and navigation, and predator-prey. Our experiments show a non-negligible performance gap, ranging from 32% to 43%, between SchedNet and mechanisms such as communication-free baselines and vanilla scheduling methods, e.g., round robin.

* Accepted in ICLR 2019 
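
As a rough illustration of weight-based scheduling, the sketch below has each agent map its observation to a scalar importance weight, entitles only the top-K agents to broadcast encoded messages, and lets every agent condition on the shared messages. The weight generator, encoder, and dimensions are hypothetical placeholders, not SchedNet's actual networks:

```python
# Minimal sketch of importance-weight-based communication scheduling in the
# spirit of SchedNet. The weight generator, message encoder, and bandwidth
# limit K are placeholder assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)
n_agents, obs_dim, msg_dim, K = 5, 4, 2, 2  # only K agents may broadcast

observations = rng.normal(size=(n_agents, obs_dim))

# Per-agent weight generator: maps an observation to a scalar importance.
w_params = rng.normal(size=obs_dim)
importance = observations @ w_params

# Scheduler: entitle the top-K most "important" agents to use the medium.
scheduled = np.argsort(importance)[-K:]

# Message encoder: only scheduled agents broadcast an encoded message.
enc_params = rng.normal(size=(obs_dim, msg_dim))
broadcast = {int(i): observations[i] @ enc_params for i in scheduled}

# Every agent conditions its action on its own observation plus the
# messages shared over the medium.
shared = np.concatenate([broadcast[i] for i in sorted(broadcast)])
print("scheduled agents:", sorted(broadcast))
print("shared message vector:", shared)
```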