
Mark Rowland

Learning Dynamics and Generalization in Reinforcement Learning

Jun 05, 2022
Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal

Understanding and Preventing Capacity Loss in Reinforcement Learning

Apr 20, 2022
Clare Lyle, Mark Rowland, Will Dabney

Marginalized Operators for Off-policy Reinforcement Learning

Mar 30, 2022
Yunhao Tang, Mark Rowland, Rémi Munos, Michal Valko

Evolutionary Dynamics and Φ-Regret Minimization in Games

Jun 28, 2021
Georgios Piliouras, Mark Rowland, Shayegan Omidshafiei, Romuald Elie, Daniel Hennes, Jerome Connor, Karl Tuyls

Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation

Jun 24, 2021
Yunhao Tang, Tadashi Kozuno, Mark Rowland, Rémi Munos, Michal Valko

Taylor Expansion of Discount Factors

Jun 14, 2021
Yunhao Tang, Mark Rowland, Rémi Munos, Michal Valko

MICo: Learning improved representations via sampling-based state similarity for Markov decision processes

Jun 03, 2021
Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, Mark Rowland

Revisiting Peng's Q(λ) for Modern Reinforcement Learning

Feb 27, 2021
Tadashi Kozuno, Yunhao Tang, Mark Rowland, Rémi Munos, Steven Kapturowski, Will Dabney, Michal Valko, David Abel

On The Effect of Auxiliary Tasks on Representation Dynamics

Feb 25, 2021
Clare Lyle, Mark Rowland, Georg Ostrovski, Will Dabney
