Matthew E. Taylor

The Effect of Q-function Reuse on the Total Regret of Tabular, Model-Free, Reinforcement Learning
Mar 07, 2021
Volodymyr Tkachuk, Sriram Ganapathi Subramanian, Matthew E. Taylor

Model-Invariant State Abstractions for Model-Based Reinforcement Learning
Feb 19, 2021
Manan Tomar, Amy Zhang, Roberto Calandra, Matthew E. Taylor, Joelle Pineau

Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems
Feb 16, 2021
Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou Ammar, Jun Wang, Matthew E. Taylor

Improving Reinforcement Learning with Human Assistance: An Argument for Human Subject Studies with HIPPO Gym
Feb 02, 2021
Matthew E. Taylor, Nicholas Nissen, Yuan Wang, Neda Navidi

HAMMER: Multi-Level Coordination of Reinforcement Learning Agents via Learned Messaging
Jan 18, 2021
Nikunj Gupta, G Srinivasaraghavan, Swarup Kumar Mohalik, Matthew E. Taylor

Useful Policy Invariant Shaping from Arbitrary Advice
Nov 02, 2020
Paniz Behboudian, Yash Satsangi, Matthew E. Taylor, Anna Harutyunyan, Michael Bowling

Maximum Reward Formulation In Reinforcement Learning
Oct 08, 2020
Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar

Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy
Sep 29, 2020
Yunshu Du, Garrett Warnell, Assefaw Gebremedhin, Peter Stone, Matthew E. Taylor

A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review
Jul 03, 2020
Adam Bignold, Francisco Cruz, Matthew E. Taylor, Tim Brys, Richard Dazeley, Peter Vamplew, Cameron Foale