Andrew Bennett

Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes

Mar 29, 2024
Andrew Bennett, Nathan Kallus, Miruna Oprescu, Wen Sun, Kaiwen Wang

Low-Rank MDPs with Continuous Action Spaces

Nov 06, 2023
Andrew Bennett, Nathan Kallus, Miruna Oprescu

Source Condition Double Robust Inference on Functionals of Inverse Problems

Jul 25, 2023
Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara

Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness

Feb 10, 2023
Andrew Bennett, Nathan Kallus, Xiaojie Mao, Whitney Newey, Vasilis Syrgkanis, Masatoshi Uehara

Provable Safe Reinforcement Learning with Binary Feedback

Oct 26, 2022
Andrew Bennett, Dipendra Misra, Nathan Kallus

Future-Dependent Value-Based Off-Policy Evaluation in POMDPs

Jul 26, 2022
Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, Chengchun Shi, Wen Sun

Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes

Oct 28, 2021
Andrew Bennett, Nathan Kallus

Have you tried Neural Topic Models? Comparative Analysis of Neural and Non-Neural Topic Models with Application to COVID-19 Twitter Data

May 21, 2021
Andrew Bennett, Dipendra Misra, Nga Than

The Variational Method of Moments

Dec 17, 2020
Andrew Bennett, Nathan Kallus

Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders

Jul 27, 2020
Andrew Bennett, Nathan Kallus, Lihong Li, Ali Mousavi
