Nathan Kallus

Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning

Jun 06, 2020
Nathan Kallus, Masatoshi Uehara


DTR Bandit: Learning to Make Response-Adaptive Decisions With Low Regret

Jun 05, 2020
Yichun Hu, Nathan Kallus


On the Optimality of Randomization in Experimental Design: How to Randomize for Minimax Variance and Design-Based Inference

May 06, 2020
Nathan Kallus


Comment: Entropy Learning for Dynamic Treatment Regimes

Apr 06, 2020
Nathan Kallus


On the role of surrogates in the efficient estimation of treatment effects with limited outcome data

Mar 27, 2020
Nathan Kallus, Xiaojie Mao


Statistically Efficient Off-Policy Policy Gradients

Feb 20, 2020
Nathan Kallus, Masatoshi Uehara


Efficient Policy Learning from Surrogate-Loss Classification Reductions

Feb 12, 2020
Andrew Bennett, Nathan Kallus


Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning

Feb 11, 2020
Nathan Kallus, Angela Zhou
