Christopher Amato

Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions
Aug 18, 2017
Miao Liu, Kavinayan Sivakumar, Shayegan Omidshafiei, Christopher Amato, Jonathan P. How

Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability
Jul 13, 2017
Shayegan Omidshafiei, Jason Pazis, Christopher Amato, Jonathan P. How, John Vian

Scalable Accelerated Decentralized Multi-Robot Policy Search in Continuous Observation Spaces
Mar 16, 2017
Shayegan Omidshafiei, Christopher Amato, Miao Liu, Michael Everett, Jonathan P. How, John Vian

Semantic-level Decentralized Multi-Robot Decision-Making using Probabilistic Macro-Observations
Mar 16, 2017
Shayegan Omidshafiei, Shih-Yuan Liu, Michael Everett, Brett T. Lopez, Christopher Amato, Miao Liu, Jonathan P. How, John Vian

Stick-Breaking Policy Learning in Dec-POMDPs
Nov 23, 2015
Miao Liu, Christopher Amato, Xuejun Liao, Lawrence Carin, Jonathan P. How

Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions
Feb 20, 2015
Shayegan Omidshafiei, Ali-akbar Agha-mohammadi, Christopher Amato, Jonathan P. How

Scalable Planning and Learning for Multiagent POMDPs: Extended Version
Dec 20, 2014
Christopher Amato, Frans A. Oliehoek

Planning for Decentralized Control of Multiple Robots Under Uncertainty
Feb 12, 2014
Christopher Amato, George D. Konidaris, Gabriel Cruz, Christopher A. Maynor, Jonathan P. How, Leslie P. Kaelbling

Incremental Clustering and Expansion for Faster Optimal Planning in Dec-POMDPs
Feb 04, 2014
Frans Adriaan Oliehoek, Matthijs T. J. Spaan, Christopher Amato, Shimon Whiteson

Policy Iteration for Decentralized Control of Markov Decision Processes
Jan 15, 2014
Daniel S. Bernstein, Christopher Amato, Eric A. Hansen, Shlomo Zilberstein