Marc G. Bellemare

An Introduction to Deep Reinforcement Learning

Dec 03, 2018
Vincent François-Lavet, Peter Henderson, Riashat Islam, Marc G. Bellemare, Joelle Pineau

Approximate Exploration through State Abstraction

Aug 29, 2018
Adrien Ali Taïga, Aaron Courville, Marc G. Bellemare

Count-Based Exploration with the Successor Representation

Aug 14, 2018
Marlos C. Machado, Marc G. Bellemare, Michael Bowling

An Analysis of Categorical Distributional Reinforcement Learning

Feb 22, 2018
Mark Rowland, Marc G. Bellemare, Will Dabney, Rémi Munos, Yee Whye Teh

Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents

Dec 01, 2017
Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, Michael Bowling

Distributional Reinforcement Learning with Quantile Regression

Oct 27, 2017
Will Dabney, Mark Rowland, Marc G. Bellemare, Rémi Munos

A Distributional Perspective on Reinforcement Learning

Jul 21, 2017
Marc G. Bellemare, Will Dabney, Rémi Munos

A Laplacian Framework for Option Discovery in Reinforcement Learning

Jun 16, 2017
Marlos C. Machado, Marc G. Bellemare, Michael Bowling

Count-Based Exploration with Neural Density Models

Jun 14, 2017
Georg Ostrovski, Marc G. Bellemare, Aäron van den Oord, Rémi Munos

The Cramer Distance as a Solution to Biased Wasserstein Gradients

May 30, 2017
Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, Rémi Munos
