Philip Thomas

Driver Profiling and Bayesian Workload Estimation Using Naturalistic Peripheral Detection Study Data

Mar 26, 2023
Nermin Caber, Jiaming Liang, Bashar I. Ahmad, Simon Godsill, Alexandra Bremers, Philip Thomas, David Oxtoby, Lee Skrypchuk

Asymptotically Unbiased Off-Policy Policy Evaluation when Reusing Old Data in Nonstationary Environments

Feb 23, 2023
Vincent Liu, Yash Chandak, Philip Thomas, Martha White

Low Variance Off-policy Evaluation with State-based Importance Sampling

Dec 21, 2022
David M. Bossens, Philip Thomas

Proximal Reinforcement Learning: A New Theory of Sequential Decision Making in Primal-Dual Spaces

May 26, 2014
Sridhar Mahadevan, Bo Liu, Philip Thomas, Will Dabney, Steve Giguere, Nicholas Jacek, Ian Gemp, Ji Liu
