
Esther Derman

Tree Search-Based Policy Optimization under Stochastic Execution Delay

Apr 08, 2024
David Valensi, Esther Derman, Shie Mannor, Gal Dalal

Solving Non-Rectangular Reward-Robust MDPs via Frequency Regularization

Sep 03, 2023
Uri Gadot, Esther Derman, Navdeep Kumar, Maxence Mohamed Elfatihi, Kfir Levy, Shie Mannor


Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization

Mar 12, 2023
Esther Derman, Yevgeniy Men, Matthieu Geist, Shie Mannor


Policy Gradient for s-Rectangular Robust Markov Decision Processes

Jan 31, 2023
Navdeep Kumar, Esther Derman, Matthieu Geist, Kfir Levy, Shie Mannor


Twice regularized MDPs and the equivalence between robustness and regularization

Oct 12, 2021
Esther Derman, Matthieu Geist, Shie Mannor


Acting in Delayed Environments with Non-Stationary Markov Policies

Jan 28, 2021
Esther Derman, Gal Dalal, Shie Mannor


Distributional Robustness and Regularization in Reinforcement Learning

Mar 05, 2020
Esther Derman, Shie Mannor

A Bayesian Approach to Robust Reinforcement Learning

May 20, 2019
Esther Derman, Daniel Mankowitz, Timothy Mann, Shie Mannor


Soft-Robust Actor-Critic Policy-Gradient

Oct 24, 2018
Esther Derman, Daniel J. Mankowitz, Timothy A. Mann, Shie Mannor
