Jack Parker-Holder

Lyapunov Exponents for Diversity in Differentiable Games

Dec 24, 2021
Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob Foerster

Towards an Understanding of Default Policies in Multitask Policy Optimization

Nov 06, 2021
Ted Moskovitz, Michael Arbel, Jack Parker-Holder, Aldo Pacchiano

Revisiting Design Choices in Model-Based Offline Reinforcement Learning

Oct 08, 2021
Cong Lu, Philip J. Ball, Jack Parker-Holder, Michael A. Osborne, Stephen J. Roberts

Replay-Guided Adversarial Environment Design

Oct 06, 2021
Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel

MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research

Sep 27, 2021
Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel

Graph Kernel Attention Transformers

Jul 16, 2021
Krzysztof Choromanski, Han Lin, Haoxian Chen, Jack Parker-Holder

Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL

Jun 30, 2021
Jack Parker-Holder, Vu Nguyen, Shaan Desai, Stephen Roberts

Same State, Different Task: Continual Reinforcement Learning without Interference

Jun 05, 2021
Samuel Kessler, Jack Parker-Holder, Philip Ball, Stefan Zohren, Stephen J. Roberts

Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment

Apr 12, 2021
Philip J. Ball, Cong Lu, Jack Parker-Holder, Stephen Roberts
