
Will Dabney

Generalised Policy Improvement with Geometric Policy Composition

Jun 17, 2022

Learning Dynamics and Generalization in Reinforcement Learning

Jun 05, 2022

Understanding and Preventing Capacity Loss in Reinforcement Learning

Apr 20, 2022

On the Expressivity of Markov Reward

Nov 01, 2021

The Difficulty of Passive Learning in Deep Reinforcement Learning

Oct 26, 2021

Revisiting Peng's Q(λ) for Modern Reinforcement Learning

Feb 27, 2021

On The Effect of Auxiliary Tasks on Representation Dynamics

Feb 25, 2021

Counterfactual Credit Assignment in Model-Free Reinforcement Learning

Nov 18, 2020

Revisiting Fundamentals of Experience Replay

Jul 13, 2020

Deep Reinforcement Learning and its Neuroscientific Implications

Jul 07, 2020