Marc G. Bellemare

An Introduction to Deep Reinforcement Learning

Dec 03, 2018

Approximate Exploration through State Abstraction

Aug 29, 2018

Count-Based Exploration with the Successor Representation

Aug 14, 2018

An Analysis of Categorical Distributional Reinforcement Learning

Feb 22, 2018

Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents

Dec 01, 2017

Distributional Reinforcement Learning with Quantile Regression

Oct 27, 2017

A Distributional Perspective on Reinforcement Learning

Jul 21, 2017

A Laplacian Framework for Option Discovery in Reinforcement Learning

Jun 16, 2017

Count-Based Exploration with Neural Density Models

Jun 14, 2017

The Cramer Distance as a Solution to Biased Wasserstein Gradients

May 30, 2017