Shie Mannor

Implicitly Normalized Explicitly Regularized Density Estimation
Jul 25, 2023
Mark Kozdoba, Binyamin Perets, Shie Mannor

Individualized Dosing Dynamics via Neural Eigen Decomposition
Jun 24, 2023
Stav Belogolovsky, Ido Greenberg, Danny Eytan, Shie Mannor

Robust Reinforcement Learning via Adversarial Kernel Approximation
Jun 09, 2023
Kaixin Wang, Uri Gadot, Navdeep Kumar, Kfir Levy, Shie Mannor

Representation-Driven Reinforcement Learning
May 31, 2023
Ofir Nabati, Guy Tennenholtz, Shie Mannor

CALM: Conditional Adversarial Latent Models for Directable Virtual Characters
May 02, 2023
Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue Bin Peng

Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization
Mar 12, 2023
Esther Derman, Yevgeniy Men, Matthieu Geist, Shie Mannor

An Efficient Solution to s-Rectangular Robust Markov Decision Processes
Jan 31, 2023
Navdeep Kumar, Kfir Levy, Kaixin Wang, Shie Mannor

Policy Gradient for s-Rectangular Robust Markov Decision Processes
Jan 31, 2023
Navdeep Kumar, Esther Derman, Matthieu Geist, Kfir Levy, Shie Mannor

SoftTreeMax: Exponential Variance Reduction in Policy Gradient via Tree Search
Jan 30, 2023
Gal Dalal, Assaf Hallak, Gugan Thoppe, Shie Mannor, Gal Chechik

Train Hard, Fight Easy: Robust Meta Reinforcement Learning
Jan 26, 2023
Ido Greenberg, Shie Mannor, Gal Chechik, Eli Meirom