Pratap Tokekar

RE-MOVE: An Adaptive Policy Design Approach for Dynamic Environments via Language-Based Feedback

Mar 14, 2023
Souradip Chakraborty, Kasun Weerakoon, Prithvi Poddar, Pratap Tokekar, Amrit Singh Bedi, Dinesh Manocha

Data-Driven Distributionally Robust Optimal Control with State-Dependent Noise

Mar 04, 2023
Rui Liu, Guangyao Shi, Pratap Tokekar

Decision-Oriented Learning with Differentiable Submodular Maximization for Vehicle Routing Problem

Mar 02, 2023
Guangyao Shi, Pratap Tokekar

Dynamically Finding Optimal Observer States to Minimize Localization Error with Complex State-Dependent Noise

Nov 30, 2022
Troi Williams, Po-Lun Chen, Sparsh Bhogavilli, Vaibhav Sanjay, Pratap Tokekar

Interpretable Deep Reinforcement Learning for Green Security Games with Real-Time Information

Nov 09, 2022
Vishnu Dutt Sharma, John P. Dickerson, Pratap Tokekar

Approximation Algorithms for Robot Tours in Random Fields with Guaranteed Estimation Accuracy

Oct 14, 2022
Shamak Dutta, Nils Wilde, Pratap Tokekar, Stephen L. Smith

D2CoPlan: A Differentiable Decentralized Planner for Multi-Robot Coverage

Sep 19, 2022
Vishnu Dutt Sharma, Lifeng Zhou, Pratap Tokekar

Risk-aware Resource Allocation for Multiple UAVs-UGVs Recharging Rendezvous

Sep 13, 2022
Ahmad Bilal Asghar, Guangyao Shi, Nare Karapetyan, James Humann, Jean-Paul Reddinger, James Dotterweich, Pratap Tokekar

Dealing with Sparse Rewards in Continuous Control Robotics via Heavy-Tailed Policies

Jun 12, 2022
Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Pratap Tokekar, Dinesh Manocha

Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning

Jun 02, 2022
Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Brian M. Sadler, Furong Huang, Pratap Tokekar, Dinesh Manocha
