Yoav Kolumbus

Explainable Reinforcement Learning via Model Transforms

Sep 24, 2022
Mira Finkelstein, Lucy Liu, Nitsan Levy Schlot, Yoav Kolumbus, David C. Parkes, Jeffrey S. Rosenshein, Sarah Keren

Understanding the emergent behaviors of reinforcement learning (RL) agents can be difficult, since such agents are often trained in complex environments using highly complex decision making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior anticipated by an observer. Most recent approaches rely on domain knowledge (which may not always be available), on an analysis of the agent's policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying MDP is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), it can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define this problem, propose a class of transforms that can be used to explain emergent behaviors, and present methods that enable an efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.

* Conference on Neural Information Processing Systems (NeurIPS) 2022 
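
A minimal sketch of the paper's core idea, under toy assumptions (the three-state MDP, the rewards, and the specific determinization transform below are all illustrative, not taken from the paper): solve an MDP, apply a transform of the kind used to expedite planning, re-solve, and check whether the optimal policy changes. When it does, the transformed element is a candidate explanation for the gap between the agent's actual behavior and the behavior an observer might anticipate.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP with transitions P[s, a, s'] and rewards
    R[s, a]; return the greedy policy and the value function."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * (P @ V)          # Q[s, a] via batched matmul
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1), V_new
        V = V_new

# Toy MDP: state 0 (start), 1 (goal), 2 (pit); action 0 = safe, 1 = risky.
P = np.zeros((3, 2, 3))
P[0, 0, 1] = 1.0                         # safe: always reaches the goal
P[0, 1, 1] = P[0, 1, 2] = 0.5            # risky: goal or pit, 50/50
P[1, :, 1] = P[2, :, 2] = 1.0            # goal and pit are absorbing
R = np.array([[1.0, 5.0], [0.0, 0.0], [-10.0, -10.0]])

policy, _ = value_iteration(P, R)        # the agent avoids the risky action

# Transform: determinize the risky action to its intended outcome.
# The optimal action in state 0 flips, so the transition noise of this
# action "explains" why the agent does not head for the high reward.
P_t = P.copy()
P_t[0, 1, :] = [0.0, 1.0, 0.0]
policy_t, _ = value_iteration(P_t, R)
print("original:", policy, "determinized:", policy_t)
```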

How and Why to Manipulate Your Own Agent

Dec 14, 2021
Yoav Kolumbus, Noam Nisan

We consider strategic settings where several users engage in a repeated online interaction, assisted by regret-minimizing agents that repeatedly play a "game" on their behalf. We study the dynamics and average outcomes of the agents' repeated game and view it as inducing a meta-game between the users. Our main focus is on whether users can benefit in this meta-game from "manipulating" their own agents by misreporting their parameters to them. We formally define this "user-agent meta-game" model for general games, discuss its properties under different notions of convergence of the dynamics of the automated agents, and analyze the equilibria induced on the users in 2x2 games in which the dynamics converge to a single equilibrium.
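
The flavor of these dynamics can be seen in a small simulation; the setup below (Hedge learners with full-information feedback, the game of Chicken, the particular misreport) is an illustrative assumption rather than the paper's model or analysis. Each agent optimizes the payoff matrix its user reported, while the row user's outcome is evaluated against her true payoffs.

```python
import numpy as np

def hedge_dynamics(A_row, A_col, T=5000, eta=0.1):
    """Two Hedge (multiplicative-weights) agents repeatedly play a
    2x2 bimatrix game, each optimizing the payoff matrix it was given.
    Returns the empirical joint distribution of play."""
    w_r, w_c = np.ones(2), np.ones(2)
    joint = np.zeros((2, 2))
    for _ in range(T):
        p, q = w_r / w_r.sum(), w_c / w_c.sum()
        joint += np.outer(p, q)
        w_r *= np.exp(eta * (A_row @ q))     # full-information updates
        w_c *= np.exp(eta * (A_col.T @ p))
        w_r /= w_r.sum(); w_c /= w_c.sum()   # renormalize for stability
    return joint / T

# Chicken; actions: 0 = Dare, 1 = Yield. Entry [i, j] is the row
# player's payoff when row plays i and column plays j.
true_row = np.array([[0., 7.], [2., 6.]])
true_col = true_row.T                        # symmetric game

# The row user either reports truthfully or exaggerates how bad
# yielding is, effectively committing her agent to Dare.
fake_row = np.array([[0., 7.], [0., 0.]])

for label, report in [("truthful", true_row), ("manipulated", fake_row)]:
    joint = hedge_dynamics(report, true_col)
    value = (joint * true_row).sum()         # evaluated on TRUE payoffs
    print(f"{label:11s} -> row user's true average payoff: {value:.2f}")
```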

Auctions Between Regret-Minimizing Agents

Oct 22, 2021
Yoav Kolumbus, Noam Nisan

We analyze a scenario in which software agents implemented as regret-minimizing algorithms engage in a repeated auction on behalf of their users. We study first-price and second-price auctions, as well as their generalized versions (e.g., those used for ad auctions). Using both theoretical analysis and simulations, we show that, surprisingly, in second-price auctions the players have incentives to misreport their true valuations to their own learning agents, while in first-price auctions it is a dominant strategy for all players to truthfully report their valuations to their agents.
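
A sandbox in this spirit, with illustrative choices throughout (Hedge learners with full feedback, a discrete bid grid, fixed valuations; the paper's results concern mean-based regret minimizers): two learning agents bid repeatedly in a sealed-bid auction, each maximizing utility for the value its user reported, while the user's realized utility is measured against her true value.

```python
import numpy as np

def bid_utilities(v, grid, q, second_price):
    """Expected utility of each bid in `grid` against an opponent who
    mixes according to q; ties are split 50/50."""
    u = np.zeros(len(grid))
    for i, b in enumerate(grid):
        price = grid if second_price else np.full_like(grid, b)
        win = (grid < b) + 0.5 * (grid == b)      # win probabilities
        u[i] = (win * q * (v - price)).sum()
    return u

def run_auction(report, true_value, v_other, grid,
                T=5000, eta=0.2, second_price=True):
    """Two Hedge agents bid repeatedly; agent 1 maximizes utility for
    the *reported* value. Returns user 1's true average utility."""
    w1, w2 = np.ones(len(grid)), np.ones(len(grid))
    total = 0.0
    for _ in range(T):
        p1, p2 = w1 / w1.sum(), w2 / w2.sum()
        total += p1 @ bid_utilities(true_value, grid, p2, second_price)
        w1 *= np.exp(eta * bid_utilities(report, grid, p2, second_price))
        w2 *= np.exp(eta * bid_utilities(v_other, grid, p1, second_price))
        w1 /= w1.sum(); w2 /= w2.sum()            # avoid overflow
    return total / T

grid = np.linspace(0.0, 1.0, 21)
for report in (1.0, 0.7):    # truthful vs. shaded report to the agent
    u = run_auction(report, true_value=1.0, v_other=0.6, grid=grid)
    print(f"report {report:.1f} -> true average utility {u:.3f}")
```

Comparing the truthful and shaded reports, and flipping second_price to False, gives a starting point for exploring the incentive question that the paper settles analytically.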

Neural Networks for Predicting Human Interactions in Repeated Games

Nov 08, 2019
Yoav Kolumbus, Gali Noti

We consider the problem of predicting human players' actions in repeated strategic interactions. Our goal is to predict the dynamic, step-by-step behavior of individual players in previously unseen games. We study the ability of neural networks to perform such predictions and the information they require. On a dataset of normal-form games from experiments with human participants, we show that standard neural networks are able to learn functions that provide more accurate predictions of the players' actions than established models from behavioral economics. The networks outperform the other models in terms of prediction accuracy and cross-entropy, and they yield higher economic value. We show that if the available input is only a short sequence of play, economic information about the game is important for predicting the behavior of human agents. Interestingly, however, we find that when the networks are trained with sufficiently long sequences of the history of play, action-based networks do well and additional economic details about the game do not improve their performance, indicating that the sequence of actions encodes sufficient information for success in the prediction task.

* Published in: Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19), AAAI Press, 2019, Pages 392-399  
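
As a rough illustration of the prediction setup (the architecture, feature encoding, and tensor shapes below are assumptions for the sketch, not the models used in the paper): a recurrent network reads the one-hot history of joint play in a repeated 2x2 game and predicts the player's next action, optionally conditioned on flattened payoff matrices standing in for the "economic" features.

```python
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    """Predict a player's next action in a repeated 2x2 game from the
    one-hot history of joint play, optionally conditioned on the
    flattened payoff matrices of both players."""
    def __init__(self, hidden=32, use_payoffs=False):
        super().__init__()
        self.use_payoffs = use_payoffs
        in_dim = 4 + (8 if use_payoffs else 0)
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # logits over the 2 actions

    def forward(self, history, payoffs=None):
        # history: (batch, T, 4); payoffs: (batch, 8) or None
        if self.use_payoffs:
            feats = payoffs.unsqueeze(1).expand(-1, history.size(1), -1)
            history = torch.cat([history, feats], dim=-1)
        _, h = self.rnn(history)
        return self.head(h[-1])

# Smoke test on random tensors standing in for real game data.
model = ActionPredictor(use_payoffs=True)
hist = torch.zeros(16, 10, 4).scatter_(2, torch.randint(4, (16, 10, 1)), 1.0)
pay = torch.rand(16, 8)                      # two flattened 2x2 matrices
loss = nn.functional.cross_entropy(model(hist, pay), torch.randint(2, (16,)))
loss.backward()
print(loss.item())
```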

Behavior-Based Machine-Learning: A Hybrid Approach for Predicting Human Decision Making

Nov 30, 2016
Gali Noti, Effi Levi, Yoav Kolumbus, Amit Daniely

A large body of work in the behavioral sciences attempts to develop models that describe the way people, as opposed to rational agents, make decisions. A recent Choice Prediction Competition (2015) challenged researchers to suggest a model that captures 14 classic choice biases and can predict human decisions under risk and ambiguity. The competition focused on simple decision problems, in which human subjects were asked to repeatedly choose between two gamble options. In this paper we present our approach for predicting human decision behavior: we suggest using machine learning algorithms with features that are based on well-established behavioral theories. The basic idea is that these psychological features are essential for the representation of the data and are important for the success of the learning process. We implement a vanilla model in which we train SVM models using behavioral features that rely on the psychological properties underlying the competition's baseline model. We show that this basic model captures the 14 choice biases and outperforms all the other learning-based models in the competition. These preliminary results suggest that such hybrid models can significantly improve the prediction of human decision making and are a promising direction for future research.
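
A toy version of the hybrid recipe (the feature set is a simplified stand-in for the features derived from the competition's baseline model, the choice problems and choice rates are made up purely for the smoke test, and SVR here stands in for the SVM variants used): compute psychology-inspired features for each choice problem and train a kernel model on observed choice rates.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def behavioral_features(gamble_a, gamble_b):
    """Psychology-inspired features for a choice between two gambles,
    each given as (outcomes, probabilities)."""
    feats = []
    for out, p in (gamble_a, gamble_b):
        out, p = np.asarray(out, float), np.asarray(p, float)
        ev = p @ out
        feats += [
            ev,                   # expected value
            p @ (out - ev) ** 2,  # variance, a simple risk measure
            p[out < 0].sum(),     # probability of a loss
            out.min(),            # worst case
            out.max(),            # best case
        ]
    return np.array(feats)

# Made-up choice problems and choice rates, for the smoke test only.
problems = [
    (([3], [1.0]), ([4, 0], [0.8, 0.2])),
    (([-3], [1.0]), ([0, -4], [0.2, 0.8])),
    (([2], [1.0]), ([10, -5], [0.5, 0.5])),
]
rate_chose_b = np.array([0.35, 0.60, 0.40])

X = np.array([behavioral_features(a, b) for a, b in problems])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X, rate_chose_b)
print(model.predict(X))           # in-sample sanity check
```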
