Matthew E. Taylor

Agent Modeling as Auxiliary Task for Deep Reinforcement Learning

Jul 22, 2019
Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor

Interactive Learning of Environment Dynamics for Sequential Tasks

Jul 19, 2019
Robert Loftin, Bei Peng, Matthew E. Taylor, Michael L. Littman, David L. Roberts

Skynet: A Top Deep RL Agent in the Inaugural Pommerman Team Competition

Apr 20, 2019
Chao Gao, Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor

Safer Deep RL with Shallow MCTS: A Case Study in Pommerman

Apr 10, 2019
Bilal Kartal, Pablo Hernandez-Leal, Chao Gao, Matthew E. Taylor

Jointly Pre-training with Supervised, Autoencoder, and Value Losses for Deep Reinforcement Learning

Apr 03, 2019
Gabriel V. de la Cruz Jr., Yunshu Du, Matthew E. Taylor

Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning

Dec 21, 2018
Gabriel V. de la Cruz, Yunshu Du, Matthew E. Taylor

Using Monte Carlo Tree Search as a Demonstrator within Asynchronous Deep RL

Nov 30, 2018
Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor

Autonomous Extraction of a Hierarchical Structure of Tasks in Reinforcement Learning, A Sequential Associate Rule Mining Approach

Nov 17, 2018
Behzad Ghazanfari, Fatemeh Afghah, Matthew E. Taylor

Is multiagent deep reinforcement learning the answer or the question? A brief survey

Oct 12, 2018
Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor

Leveraging human knowledge in tabular reinforcement learning: A study of human subjects

May 15, 2018
Ariel Rosenfeld, Moshe Cohen, Matthew E. Taylor, Sarit Kraus
