Kai Arulkumaran

Preference-Learning Emitters for Mixed-Initiative Quality-Diversity Algorithms

Oct 25, 2022
Roberto Gallotta, Kai Arulkumaran, L. B. Soros

Surrogate Infeasible Fitness Acquirement FI-2Pop for Procedural Content Generation

May 12, 2022
Roberto Gallotta, Kai Arulkumaran, L. B. Soros

On the link between conscious function and general intelligence in humans and machines

Mar 24, 2022
Arthur Juliani, Kai Arulkumaran, Shuntaro Sasai, Ryota Kanai

All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL

Feb 24, 2022
Kai Arulkumaran, Dylan R. Ashley, Jürgen Schmidhuber, Rupesh K. Srivastava

Learning Relative Return Policies With Upside-Down Reinforcement Learning

Feb 23, 2022
Dylan R. Ashley, Kai Arulkumaran, Jürgen Schmidhuber, Rupesh Kumar Srivastava

Diversity-based Trajectory and Goal Selection with Hindsight Experience Replay

Aug 17, 2021
Tianhong Dai, Hengyan Liu, Kai Arulkumaran, Guangyu Ren, Anil Anthony Bharath

A Pragmatic Look at Deep Imitation Learning

Aug 04, 2021
Kai Arulkumaran, Dan Ogawa Lillrank

Privileged Information Dropout in Reinforcement Learning

May 19, 2020
Pierre-Alexandre Kamienny, Kai Arulkumaran, Feryal Behbahani, Wendelin Boehmer, Shimon Whiteson

Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation

Dec 18, 2019
Tianhong Dai, Kai Arulkumaran, Samyakh Tukra, Feryal Behbahani, Anil Anthony Bharath

Sample-Efficient Reinforcement Learning with Maximum Entropy Mellowmax Episodic Control

Nov 21, 2019
Marta Sarrico, Kai Arulkumaran, Andrea Agostinelli, Pierre Richemond, Anil Anthony Bharath
