
Pratap Tokekar

University of Maryland, College Park

Pred-NBV: Prediction-guided Next-Best-View for 3D Object Reconstruction

Apr 22, 2023

RE-MOVE: An Adaptive Policy Design Approach for Dynamic Environments via Language-Based Feedback

Mar 14, 2023

Data-Driven Distributionally Robust Optimal Control with State-Dependent Noise

Mar 04, 2023

Decision-Oriented Learning with Differentiable Submodular Maximization for Vehicle Routing Problem

Mar 02, 2023

Dynamically Finding Optimal Observer States to Minimize Localization Error with Complex State-Dependent Noise

Nov 30, 2022

Interpretable Deep Reinforcement Learning for Green Security Games with Real-Time Information

Nov 09, 2022

Approximation Algorithms for Robot Tours in Random Fields with Guaranteed Estimation Accuracy

Oct 14, 2022

D2CoPlan: A Differentiable Decentralized Planner for Multi-Robot Coverage

Sep 19, 2022

Risk-aware Resource Allocation for Multiple UAVs-UGVs Recharging Rendezvous

Sep 13, 2022

Dealing with Sparse Rewards in Continuous Control Robotics via Heavy-Tailed Policies

Jun 12, 2022