Amrit Singh Bedi

DMCA: Dense Multi-agent Navigation using Attention and Communication

Sep 28, 2022
Senthil Hariharan Arul, Amrit Singh Bedi, Dinesh Manocha

Multi Robot Collision Avoidance by Learning Whom to Communicate

Sep 14, 2022
Senthil Hariharan Arul, Amrit Singh Bedi, Dinesh Manocha

RTAW: An Attention Inspired Reinforcement Learning Method for Multi-Robot Task Allocation in Warehouse Environments

Sep 13, 2022
Aakriti Agrawal, Amrit Singh Bedi, Dinesh Manocha

DC-MRTA: Decentralized Multi-Robot Task Allocation and Navigation in Complex Environments

Sep 07, 2022
Aakriti Agrawal, Senthil Hariharan, Amrit Singh Bedi, Dinesh Manocha

HTRON: Efficient Outdoor Navigation with Sparse Rewards via Heavy Tailed Adaptive Reinforce Algorithm

Jul 08, 2022
Kasun Weerakoon, Souradip Chakraborty, Nare Karapetyan, Adarsh Jagan Sathyamoorthy, Amrit Singh Bedi, Dinesh Manocha

FedBC: Calibrating Global and Local Models via Federated Learning Beyond Consensus

Jun 26, 2022
Amrit Singh Bedi, Chen Fan, Alec Koppel, Anit Kumar Sahu, Brian M. Sadler, Furong Huang, Dinesh Manocha

Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Conservative Natural Policy Gradient Primal-Dual Algorithm

Jun 12, 2022
Qinbo Bai, Amrit Singh Bedi, Vaneet Aggarwal

Dealing with Sparse Rewards in Continuous Control Robotics via Heavy-Tailed Policies

Jun 12, 2022
Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Pratap Tokekar, Dinesh Manocha

Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning

Jun 02, 2022
Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Brian M. Sadler, Furong Huang, Pratap Tokekar, Dinesh Manocha
