Changmin Yu

Successor-Predecessor Intrinsic Exploration
May 24, 2023
Changmin Yu, Neil Burgess, Maneesh Sahani, Sam Gershman

Unsupervised representational learning with recognition-parametrised probabilistic models
Sep 13, 2022
William I. Walker, Hugo Soulat, Changmin Yu, Maneesh Sahani

Amortised Inference in Structured Generative Models with Explaining Away
Sep 12, 2022
Changmin Yu, Hugo Soulat, Neil Burgess, Maneesh Sahani

SEREN: Knowing When to Explore and When to Exploit
May 30, 2022
Changmin Yu, David Mguni, Dong Li, Aivar Sootla, Jun Wang, Neil Burgess

Learning State Representations via Retracing in Reinforcement Learning
Nov 24, 2021
Changmin Yu, Dong Li, Jianye Hao, Jun Wang, Neil Burgess

DESTA: A Framework for Safe Reinforcement Learning with Markov Games of Intervention
Oct 27, 2021
David Mguni, Joel Jennings, Taher Jafferjee, Aivar Sootla, Yaodong Yang, Changmin Yu, Usman Islam, Ziyan Wang, Jun Wang

Prediction with directed transitions: complex eigenstructure, grid cells and phase coding
Jun 05, 2020
Changmin Yu, Timothy E. J. Behrens, Neil Burgess
