Joseph Lubars

The Role of Lookahead and Approximate Policy Evaluation in Policy Iteration with Linear Value Function Approximation

Sep 28, 2021
Anna Winnicki, Joseph Lubars, Michael Livesay, R. Srikant

Optimistic Policy Iteration for MDPs with Acyclic Transient State Structure

Feb 13, 2021
Joseph Lubars, Anna Winnicki, Michael Livesay, R. Srikant

Combining Reinforcement Learning with Model Predictive Control for On-Ramp Merging

Nov 17, 2020
Joseph Lubars, Harsh Gupta, Adnan Raja, R. Srikant, Liyun Li, Xinzhou Wu
