Will Dabney

Disentangling the Causes of Plasticity Loss in Neural Networks

Feb 29, 2024
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, Will Dabney

A Distributional Analogue to the Successor Representation

Feb 13, 2024
Harley Wiltzer, Jesse Farebrother, Arthur Gretton, Yunhao Tang, André Barreto, Will Dabney, Marc G. Bellemare, Mark Rowland

Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model

Feb 12, 2024
Mark Rowland, Li Kevin Wenliang, Rémi Munos, Clare Lyle, Yunhao Tang, Will Dabney

Off-policy Distributional Q(λ): Distributional RL without Importance Sampling

Feb 08, 2024
Yunhao Tang, Mark Rowland, Rémi Munos, Bernardo Ávila Pires, Will Dabney

Bootstrapped Representations in Reinforcement Learning

Jun 16, 2023
Charline Le Lan, Stephen Tu, Mark Rowland, Anna Harutyunyan, Rishabh Agarwal, Marc G. Bellemare, Will Dabney

The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation

May 28, 2023
Mark Rowland, Yunhao Tang, Clare Lyle, Rémi Munos, Marc G. Bellemare, Will Dabney

Deep Reinforcement Learning with Plasticity Injection

May 24, 2023
Evgenii Nikishin, Junhyuk Oh, Georg Ostrovski, Clare Lyle, Razvan Pascanu, Will Dabney, André Barreto

Representations and Exploration for Deep Reinforcement Learning using Singular Value Decomposition

May 02, 2023
Yash Chandak, Shantanu Thakoor, Zhaohan Daniel Guo, Yunhao Tang, Rémi Munos, Will Dabney, Diana L Borsa

Understanding plasticity in neural networks

Mar 02, 2023
Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Ávila Pires, Razvan Pascanu, Will Dabney

An Analysis of Quantile Temporal-Difference Learning

Jan 11, 2023
Mark Rowland, Rémi Munos, Mohammad Gheshlaghi Azar, Yunhao Tang, Georg Ostrovski, Anna Harutyunyan, Karl Tuyls, Marc G. Bellemare, Will Dabney
