Mike Preuss

Memory Gym: Partially Observable Challenges to Memory-Based Agents in Endless Episodes

Sep 29, 2023
Marco Pleines, Matthias Pallasch, Frank Zimmer, Mike Preuss

Believable Minecraft Settlements by Means of Decentralised Iterative Planning

Sep 19, 2023
Arthur van der Staaij, Jelmer Prins, Vincent L. Prins, Julian Poelsma, Thera Smit, Matthias Müller-Brockhausen, Mike Preuss

Models Matter: The Impact of Single-Step Retrosynthesis on Synthesis Planning

Aug 10, 2023
Paula Torren-Peraire, Alan Kai Hassen, Samuel Genheden, Jonas Verhoeven, Djork-Arne Clevert, Mike Preuss, Igor Tetko

Two-Memory Reinforcement Learning

Apr 23, 2023
Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

Mind the Retrosynthesis Gap: Bridging the divide between Single-step and Multi-step Retrosynthesis Prediction

Dec 12, 2022
Alan Kai Hassen, Paula Torren-Peraire, Samuel Genheden, Jonas Verhoeven, Mike Preuss, Igor Tetko

First Go, then Post-Explore: the Benefits of Post-Exploration in Intrinsic Motivation

Dec 06, 2022
Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

Continuous Episodic Control

Nov 28, 2022
Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat

On the Verge of Solving Rocket League using Deep Reinforcement Learning and Sim-to-sim Transfer

May 24, 2022
Marco Pleines, Konstantin Ramthun, Yannik Wegener, Hendrik Meyer, Matthias Pallasch, Sebastian Prior, Jannik Drögemüller, Leon Büttinghaus, Thilo Röthemeyer, Alexander Kaschwig, Oliver Chmurzynski, Frederik Rohkrähmer, Roman Kalkreuth, Frank Zimmer, Mike Preuss

Generalization, Mayhems and Limits in Recurrent Proximal Policy Optimization

May 23, 2022
Marco Pleines, Matthias Pallasch, Frank Zimmer, Mike Preuss
