Mark Rowland

DoMo-AC: Doubly Multi-step Off-policy Actor-Critic Algorithm

May 29, 2023
Yunhao Tang, Tadashi Kozuno, Mark Rowland, Anna Harutyunyan, Rémi Munos, Bernardo Ávila Pires, Michal Valko

The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation

May 28, 2023
Mark Rowland, Yunhao Tang, Clare Lyle, Rémi Munos, Marc G. Bellemare, Will Dabney

An Analysis of Quantile Temporal-Difference Learning

Jan 11, 2023
Mark Rowland, Rémi Munos, Mohammad Gheshlaghi Azar, Yunhao Tang, Georg Ostrovski, Anna Harutyunyan, Karl Tuyls, Marc G. Bellemare, Will Dabney

A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces

Dec 08, 2022
Charline Le Lan, Joshua Greaves, Jesse Farebrother, Mark Rowland, Fabian Pedregosa, Rishabh Agarwal, Marc G. Bellemare

Understanding Self-Predictive Learning for Reinforcement Learning

Dec 06, 2022
Yunhao Tang, Zhaohan Daniel Guo, Pierre Harvey Richemond, Bernardo Ávila Pires, Yash Chandak, Rémi Munos, Mark Rowland, Mohammad Gheshlaghi Azar, Charline Le Lan, Clare Lyle, András György, Shantanu Thakoor, Will Dabney, Bilal Piot, Daniele Calandriello, Michal Valko

Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees

Sep 28, 2022
Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Rémi Munos, Alexey Naumov, Mark Rowland, Michal Valko, Pierre Menard

Learning Correlated Equilibria in Mean-Field Games

Aug 22, 2022
Paul Muller, Romuald Elie, Mark Rowland, Mathieu Lauriere, Julien Perolat, Sarah Perrin, Matthieu Geist, Georgios Piliouras, Olivier Pietquin, Karl Tuyls

The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning

Jul 15, 2022
Yunhao Tang, Mark Rowland, Rémi Munos, Bernardo Ávila Pires, Will Dabney, Marc G. Bellemare

Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning

Jun 30, 2022
Julien Perolat, Bart de Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T. Connor, Neil Burch, Thomas Anthony, Stephen McAleer, Romuald Elie, Sarah H. Cen, Zhe Wang, Audrunas Gruslys, Aleksandra Malysheva, Mina Khan, Sherjil Ozair, Finbarr Timbers, Toby Pohlen, Tom Eccles, Mark Rowland, Marc Lanctot, Jean-Baptiste Lespiau, Bilal Piot, Shayegan Omidshafiei, Edward Lockhart, Laurent Sifre, Nathalie Beauguerlange, Rémi Munos, David Silver, Satinder Singh, Demis Hassabis, Karl Tuyls

Generalised Policy Improvement with Geometric Policy Composition

Jun 17, 2022
Shantanu Thakoor, Mark Rowland, Diana Borsa, Will Dabney, Rémi Munos, André Barreto
