James Queeney

PIETRA: Physics-Informed Evidential Learning for Traversing Out-of-Distribution Terrain

Sep 04, 2024

Provably Efficient Off-Policy Adversarial Imitation Learning with Convergence Guarantees

May 26, 2024

A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations

Feb 29, 2024

Adversarial Imitation Learning from Visual Observations using Latent Information

Sep 29, 2023

Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees

Jan 31, 2023

Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning

Jan 30, 2023

Generalized Policy Improvement Algorithms with Theoretically Supported Sample Reuse

Jun 28, 2022

Generalized Proximal Policy Optimization with Sample Reuse

Oct 29, 2021

Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach

Dec 19, 2020