Matteo Hessel

Self-Consistent Models and Values

Oct 25, 2021
Gregory Farquhar, Kate Baumli, Zita Marinho, Angelos Filos, Matteo Hessel, Hado van Hasselt, David Silver

Emphatic Algorithms for Deep Reinforcement Learning

Jun 21, 2021
Ray Jiang, Tom Zahavy, Zhongwen Xu, Adam White, Matteo Hessel, Charles Blundell, Hado van Hasselt

Podracer architectures for scalable Reinforcement Learning

Apr 13, 2021
Matteo Hessel, Manuel Kroiss, Aidan Clark, Iurii Kemaev, John Quan, Thomas Keck, Fabio Viola, Hado van Hasselt

Muesli: Combining Improvements in Policy Optimization

Apr 13, 2021
Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, Hado van Hasselt

Discovery of Options via Meta-Learned Subgoals

Feb 12, 2021
Vivek Veeriah, Tom Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado van Hasselt, David Silver, Satinder Singh

Discovering Reinforcement Learning Algorithms

Jul 17, 2020
Junhyuk Oh, Matteo Hessel, Wojciech M. Czarnecki, Zhongwen Xu, Hado van Hasselt, Satinder Singh, David Silver

Meta-Gradient Reinforcement Learning with an Objective Discovered Online

Jul 16, 2020
Zhongwen Xu, Hado van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, David Silver

Expected Eligibility Traces

Jul 03, 2020
Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, David Silver, André Barreto, Diana Borsa

Self-Tuning Deep Reinforcement Learning

Mar 02, 2020
Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh
