Prashant Doshi

University of Georgia

MVSA-Net: Multi-View State-Action Recognition for Robust and Deployable Trajectory Generation

Nov 18, 2023

A Novel Variational Lower Bound for Inverse Reinforcement Learning

Nov 10, 2023

Latent Interactive A2C for Improved RL in Open Many-Agent Systems

May 09, 2023

IRL with Partial Observations using the Principle of Uncertain Maximum Entropy

Aug 15, 2022

Marginal MAP Estimation for Inverse RL under Occlusion with Observer Noise

Sep 16, 2021

A Hierarchical Bayesian model for Inverse RL in Partially-Controlled Environments

Jul 13, 2021

Many Agent Reinforcement Learning Under Partial Observability

Jun 17, 2021

Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards

Oct 15, 2020

Recurrent Sum-Product-Max Networks for Decision Making in Perfectly-Observed Environments

Jun 12, 2020

Maximum Entropy Multi-Task Inverse RL

Apr 27, 2020