Jack Parker-Holder

The Surprising Effectiveness of Latent World Models for Continual Reinforcement Learning

Nov 29, 2022
Samuel Kessler, Piotr Miłoś, Jack Parker-Holder, Stephen J. Roberts

Learning General World Models in a Handful of Reward-Free Deployments

Oct 23, 2022
Yingchen Xu, Jack Parker-Holder, Aldo Pacchiano, Philip J. Ball, Oleh Rybkin, Stephen J. Roberts, Tim Rocktäschel, Edward Grefenstette

Hierarchical Kickstarting for Skill Transfer in Reinforcement Learning

Jul 23, 2022
Michael Matthews, Mikayel Samvelyan, Jack Parker-Holder, Edward Grefenstette, Tim Rocktäschel

Bayesian Generational Population-Based Training

Jul 19, 2022
Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne

Grounding Aleatoric Uncertainty in Unsupervised Environment Design

Jul 11, 2022
Minqi Jiang, Michael Dennis, Jack Parker-Holder, Andrei Lupu, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel, Jakob Foerster

Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations

Jun 09, 2022
Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh

Insights From the NeurIPS 2021 NetHack Challenge

Mar 22, 2022
Eric Hambro, Sharada Mohanty, Dmitrii Babaev, Minwoo Byeon, Dipam Chakraborty, Edward Grefenstette, Minqi Jiang, Daejin Jo, Anssi Kanervisto, Jongmin Kim, Sungwoong Kim, Robert Kirk, Vitaly Kurin, Heinrich Küttler, Taehwon Kwon, Donghoon Lee, Vegard Mella, Nantas Nardelli, Ivan Nazarov, Nikita Ovsov, Jack Parker-Holder, Roberta Raileanu, Karolis Ramanauskas, Tim Rocktäschel, Danielle Rothermel, Mikayel Samvelyan, Dmitry Sorokin, Maciej Sypetkowski, Michał Sypetkowski

Evolving Curricula with Regret-Based Environment Design

Mar 08, 2022
Jack Parker-Holder, Minqi Jiang, Michael Dennis, Mikayel Samvelyan, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel

On-the-fly Strategy Adaptation for ad-hoc Agent Coordination

Mar 08, 2022
Jaleh Zand, Jack Parker-Holder, Stephen J. Roberts

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

Jan 11, 2022
Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer