Sarah Keren

Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning

Nov 01, 2023
Matthias Gerstgrasser, Tom Danino, Sarah Keren

We present a novel multi-agent RL approach, Selective Multi-Agent Prioritized Experience Relay, in which agents share with other agents a limited number of transitions they observe during training. The intuition is that even a small number of relevant experiences from other agents can help each agent learn. Unlike many other multi-agent RL algorithms, this approach allows for largely decentralized training, requiring only a limited communication channel between agents. We show that our approach outperforms baseline no-sharing decentralized training and state-of-the-art multi-agent RL algorithms. Further, sharing only a small number of highly relevant experiences outperforms sharing all experiences between agents, and the performance uplift from selective experience sharing is robust across a range of hyperparameters and DQN variants. A reference implementation of our algorithm is available at https://github.com/mgerstgrasser/super.
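Below is a minimal sketch of the selective-sharing idea (not the reference implementation linked above), assuming each agent keeps its own replay buffer and, after collecting new transitions, relays only the few with the highest TD error to its peers. The class and function names (ReplayBuffer, share_top_k) are illustrative.

```python
import heapq
import random
from collections import deque

class ReplayBuffer:
    """Per-agent replay buffer; a transition is (state, action, reward, next_state, done)."""

    def __init__(self, capacity=10_000):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(list(self.storage), min(batch_size, len(self.storage)))

def share_top_k(new_transitions, td_errors, peer_buffers, k=2):
    """Relay only the k transitions with the largest TD error to every peer's buffer.

    This is the "selective" part: a small number of highly relevant experiences
    is shared instead of everything an agent observes.
    """
    top = heapq.nlargest(k, zip(td_errors, new_transitions), key=lambda pair: pair[0])
    for _, transition in top:
        for buffer in peer_buffers:
            buffer.add(transition)

# Toy usage: agent 0 relays its two most surprising transitions to agent 1.
buffers = [ReplayBuffer(), ReplayBuffer()]
transitions = [((0,), 1, 0.0, (1,), False),
               ((1,), 0, 1.0, (2,), True),
               ((2,), 1, 0.1, (3,), False)]
td_errors = [0.05, 0.9, 0.2]   # e.g. |r + gamma * max_a Q(s', a) - Q(s, a)| per transition
for t in transitions:
    buffers[0].add(t)
share_top_k(transitions, td_errors, peer_buffers=[buffers[1]], k=2)
print(len(buffers[1].storage))  # -> 2 shared transitions
```

Training stays largely decentralized in this picture: the only coordination between agents is the narrow channel carrying the shared transitions, as described in the abstract.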

* To be published at NeurIPS 2023 

Value of Assistance for Grasping

Oct 22, 2023
Mohammad Masarwy, Yuval Goshen, David Dovrat, Sarah Keren

In many realistic settings, a robot is tasked with grasping an object without knowing its exact pose. Instead, the robot relies on a probabilistic estimate of the pose to decide how to attempt the grasp. We offer a novel Value of Assistance (VOA) measure for assessing the expected effect a specific observation will have on the robot's ability to successfully complete the grasp. VOA thus supports the decision of which sensing action would be most beneficial to the grasping task. We evaluate the suggested measure in both simulated and real-world robotic settings.
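As an illustration of the kind of quantity VOA captures, here is a hypothetical worked sketch: with a discrete belief over candidate object poses, a per-pose grasp-success model, and a simple sensor model, the value of a sensing action is the expected best-grasp success after the observation minus the best-grasp success under the prior. All numbers, distributions, and names are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical belief over three candidate object poses.
prior = np.array([0.5, 0.3, 0.2])

# Hypothetical probability that grasp g succeeds if the true pose is p: success[g, p].
success = np.array([
    [0.9, 0.2, 0.1],   # grasp 0 works well only for pose 0
    [0.3, 0.8, 0.7],   # grasp 1 covers poses 1 and 2
])

# Hypothetical sensor model: probability of reading o given pose p: obs_model[o, p].
obs_model = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.9, 0.9],
])

def best_expected_success(belief):
    """Expected success of the best grasp under a given belief over poses."""
    return max(success @ belief)

def value_of_assistance(prior, obs_model):
    """Expected gain in grasp success from taking the sensing action before grasping."""
    baseline = best_expected_success(prior)
    expected_after = 0.0
    for o in range(obs_model.shape[0]):
        p_obs = obs_model[o] @ prior                 # marginal probability of reading o
        posterior = obs_model[o] * prior / p_obs     # Bayes update of the pose belief
        expected_after += p_obs * best_expected_success(posterior)
    return expected_after - baseline

print(f"VOA of the sensing action: {value_of_assistance(prior, obs_model):.3f}")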


Value of Assistance for Mobile Agents

Aug 23, 2023
Adi Amuzig, David Dovrat, Sarah Keren

Mobile robotic agents often suffer from localization uncertainty, which grows with time and with the agents' movement and can hinder their ability to accomplish their tasks. In some settings, it may be possible to perform assistive actions that reduce uncertainty about a robot's location. For example, in a collaborative multi-robot system, a wheeled robot can request assistance from a drone that can fly to its estimated location and reveal its exact location on the map or accompany it to its intended location. Since assistance may be costly and limited, and may be requested by different members of a team, there is a need for principled ways to decide which assistance to provide to an agent, when to provide it, and which agent within a team to help. For this purpose, we propose Value of Assistance (VOA) to represent the expected cost reduction that assistance will yield at a given point of execution. We offer ways to compute VOA based on estimations of the robot's future uncertainty, modeled as a Gaussian process. We specify conditions under which our VOA measures are valid and empirically demonstrate the ability of our measures to predict the agent's average cost reduction when receiving assistance in both simulated and real-world robotic settings.
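A minimal numeric sketch of the expected-cost-reduction view of VOA, under strong simplifying assumptions: localization variance grows linearly with travel, an assistive action resets it, and execution cost is distance plus a penalty proportional to accumulated variance. The growth rate, penalty weight, and cost model are illustrative, not the Gaussian-process model used in the paper.

```python
def expected_cost(distance, variance_rate, penalty, assist_at=None):
    """Expected execution cost over a unit-speed path of the given length.

    Localization variance grows by `variance_rate` per unit travelled and is
    reset to zero if assistance is received at position `assist_at`.
    """
    cost, variance = 0.0, 0.0
    for step in range(int(distance)):
        if assist_at is not None and step == assist_at:
            variance = 0.0                    # e.g. a drone reveals the exact location
        variance += variance_rate
        cost += 1.0 + penalty * variance      # move one unit, pay for uncertainty
    return cost

def value_of_assistance(distance, variance_rate, penalty, assist_at):
    """Expected cost reduction yielded by assisting at a given point of execution."""
    return (expected_cost(distance, variance_rate, penalty)
            - expected_cost(distance, variance_rate, penalty, assist_at))

# Assisting halfway through a 10-step path is worth more than assisting near the end.
print(value_of_assistance(10, variance_rate=0.1, penalty=2.0, assist_at=5))
print(value_of_assistance(10, variance_rate=0.1, penalty=2.0, assist_at=8))
```

Comparing VOA across candidate assistance points (or across teammates) in this way is what supports the decision of which assistance to provide and when.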

* IROS 2023. git repository: https://github.com/CLAIR-LAB-TECHNION/VOA 

Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning

Jul 11, 2023
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren

Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor changes in the environment. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state-machine abstractions that induce subtasks based on the current task's rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions and thus enhancing transfer. Our empirical evaluation shows that our representations improve sample efficiency and few-shot transfer in a variety of domains.
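A sketch of the pre-planning step described above, assuming a toy reward machine given as a graph over abstract states with symbol-labelled transitions: a shortest path to the accepting state yields the symbol to pursue next, which can be appended to the agent's observation and rewarded when the corresponding abstract transition is achieved. The RM structure and names here are illustrative.

```python
from collections import deque

# Hypothetical reward machine for a "get key, open door, reach goal" task:
# abstract states map symbols to successor abstract states.
reward_machine = {
    "start":     {"got_key": "has_key"},
    "has_key":   {"opened_door": "door_open"},
    "door_open": {"at_goal": "accept"},
    "accept":    {},
}

def next_optimal_symbol(rm, current, accepting="accept"):
    """Return the symbol labelling the first transition on a shortest path to acceptance.

    This is the contextual hint: from its current RM state, the agent is told
    which abstract transition to pursue next.
    """
    queue = deque([(current, None)])
    visited = {current}
    while queue:
        state, first_symbol = queue.popleft()
        if state == accepting:
            return first_symbol
        for symbol, nxt in rm[state].items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, first_symbol or symbol))
    return None

# The hint is shared vocabulary across tasks: it can be concatenated to the
# observation, and a shaping bonus granted when the hinted transition occurs.
print(next_optimal_symbol(reward_machine, "start"))     # -> "got_key"
print(next_optimal_symbol(reward_machine, "has_key"))   # -> "opened_door"
```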

* IJCAI Workshop on Planning and Reinforcement Learning, 2023 

Explainable Reinforcement Learning via Model Transforms

Sep 24, 2022
Mira Finkelstein, Lucy Liu, Nitsan Levy Schlot, Yoav Kolumbus, David C. Parkes, Jeffrey S. Rosenshein, Sarah Keren

Understanding the emerging behaviors of reinforcement learning (RL) agents may be difficult, since such agents are often trained in complex environments using highly complex decision-making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies that may arise between the behavior of an agent and the behavior anticipated by an observer. Most recent approaches rely on domain knowledge, which may not always be available, on an analysis of the agent's policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying MDP is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), it can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they may represent meaningful explanations for gaps between the anticipated and actual agent behavior. We formally define this problem, suggest a class of transforms that can be used for explaining emergent behaviors, and suggest methods that enable an efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
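A compact sketch of this kind of search, under strong simplifying assumptions: the environment is a small tabular MDP, candidate transforms are functions that modify it, and a transform counts as an explanation if the agent's actual behavior becomes optimal (via value iteration) in the transformed model. The MDP, the transform, and its wording are illustrative.

```python
import numpy as np

def greedy_policy(P, R, gamma=0.95, iters=200):
    """Value iteration on a tabular MDP with P[a][s, s'] and R[s, a]; returns the greedy policy."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Toy 3-state MDP: from state 0, action 0 detours through state 1 (reward 5),
# action 1 goes straight to the terminal state 2 (reward 1).
P = [np.array([[0, 1, 0], [0, 0, 1], [0, 0, 1]], float),   # action 0
     np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]], float)]   # action 1
R = np.array([[0.0, 1.0], [5.0, 5.0], [0.0, 0.0]])

anticipated = greedy_policy(P, R)   # the observer expects the detour (action 0 in state 0)
actual = np.array([1, 0, 0])        # the agent goes straight instead

# Candidate model transforms, each paired with the explanation it encodes.
def without_detour_reward(P, R):
    R2 = R.copy()
    R2[1, :] = 0.0                  # as if the reward at state 1 were absent
    return P, R2

candidates = {"the agent does not believe state 1 yields a reward": without_detour_reward}

# A transform explains the gap if the actual behavior is optimal in the transformed model.
for explanation, transform in candidates.items():
    if np.array_equal(greedy_policy(*transform(P, R)), actual):
        print("Explanation:", explanation)
```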

* Conference on Neural Information Processing Systems (NeurIPS) 2022 

Promoting Resilience in Multi-Agent Reinforcement Learning via Confusion-Based Communication

Nov 12, 2021
Ofir Abu, Matthias Gerstgrasser, Jeffrey Rosenschein, Sarah Keren

Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support the ability of agents to adapt to unexpected changes in their environment and to operate successfully given their environment's dynamic nature (which may be intensified by the presence of other agents). In this work, we highlight the relationship between a group's ability to collaborate effectively and the group's resilience, which we measure as the group's ability to adapt to perturbations in the environment. To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol, according to which agents broadcast observations that are misaligned with their previous experiences. Decisions regarding the width and frequency of messages are learned autonomously by the agents, which are incentivized to reduce confusion. We present an empirical evaluation of our approach in a variety of MARL settings.
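A minimal sketch of the confusion-based trigger, assuming confusion is approximated as the distance between a new observation and a running mean of past observations, and that an observation is broadcast only when this misalignment exceeds a threshold. The names, the confusion measure, and the fixed threshold are illustrative simplifications of the learned protocol described above.

```python
import numpy as np

class ConfusionBroadcaster:
    """Broadcast an observation only when it is misaligned with previous experience."""

    def __init__(self, obs_dim, threshold=2.0):
        self.mean = np.zeros(obs_dim)   # running summary of past observations
        self.count = 0
        self.threshold = threshold      # in the paper this decision is effectively learned

    def confusion(self, obs):
        """How surprising the observation is relative to what the agent has seen before."""
        return float(np.linalg.norm(obs - self.mean))

    def step(self, obs):
        """Update the experience summary; return the observation if it should be broadcast."""
        message = obs if self.count > 0 and self.confusion(obs) > self.threshold else None
        self.count += 1
        self.mean += (obs - self.mean) / self.count
        return message

agent = ConfusionBroadcaster(obs_dim=2)
for obs in [np.array([0.0, 0.0]), np.array([0.1, -0.1]), np.array([5.0, 5.0])]:
    msg = agent.step(obs)
    if msg is not None:
        print("broadcasting surprising observation:", msg)   # triggered by the perturbation
```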

* NeurIPS 2021 workshop submission 

Designing Environments Conducive to Interpretable Robot Behavior

Jul 02, 2020
Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David Smith, Subbarao Kambhampati

Designing robots capable of generating interpretable behavior is a prerequisite for achieving effective human-robot collaboration. This means that the robots need to be capable of generating behavior that aligns with human expectations and, when required, providing explanations to the humans in the loop. However, exhibiting such behavior in arbitrary environments could be quite expensive for robots, and in some cases the robot may not even be able to exhibit the expected behavior. Given structured environments (like warehouses and restaurants), it may be possible to design the environment so as to boost the interpretability of the robot's behavior or to shape the human's expectations of the robot's behavior. In this paper, we investigate the opportunities and limitations of environment design as a tool to promote a type of interpretable behavior known in the literature as explicable behavior. We formulate a novel environment design framework that considers design over multiple tasks and over a time horizon. In addition, we explore the longitudinal aspect of explicable behavior and the trade-off that arises between the cost of design and the cost of generating explicable behavior over a time horizon.
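The design/explicability trade-off can be illustrated with a tiny sketch: given hypothetical per-modification design costs and the explicability-cost saving each modification yields per episode, choose the subset of modifications minimizing one-time design cost plus explicability cost accumulated over the horizon. The numbers and the brute-force search are illustrative, not the paper's formulation.

```python
from itertools import combinations

# Hypothetical options: (name, one-time design cost, per-episode explicability cost saved).
base_explicability_cost = 10.0   # per-episode cost of behaving explicably with no redesign
modifications = [
    ("remove occluding shelf", 6.0, 4.0),
    ("add signage",            2.0, 2.0),
    ("widen corridor",         9.0, 1.0),
]

def total_cost(chosen, horizon):
    """One-time design cost plus explicability cost accumulated over the horizon."""
    design = sum(cost for _, cost, _ in chosen)
    per_episode = base_explicability_cost - sum(saving for _, _, saving in chosen)
    return design + horizon * max(per_episode, 0.0)

def best_design(horizon):
    """Brute-force search over subsets of modifications (fine for a handful of options)."""
    subsets = (combo for r in range(len(modifications) + 1)
               for combo in combinations(modifications, r))
    return min(subsets, key=lambda chosen: total_cost(chosen, horizon))

# Over a short horizon little redesign pays off; over a long horizon, more of it does.
for horizon in (1, 20):
    chosen = best_design(horizon)
    print(horizon, [name for name, _, _ in chosen], total_cost(chosen, horizon))
```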
