David Silver

Continuous control with deep reinforcement learning

Feb 29, 2016
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra

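This is the DDPG paper: an actor-critic method that combines deterministic policy gradients with deep function approximation, experience replay, and slowly-moving target networks. Below is a minimal NumPy sketch of two of those ingredients, the bootstrapped critic target and the soft target-network update θ' ← τθ + (1 − τ)θ'; the toy linear actor/critic, shapes, and hyperparameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, batch = 3, 1, 32

# Toy linear "networks" standing in for the deep actor and critic (illustrative).
W_mu = rng.normal(size=(state_dim, action_dim))      # actor: mu(s) = s @ W_mu
w_q  = rng.normal(size=state_dim + action_dim)       # critic: Q(s, a) = [s, a] @ w_q
W_mu_target, w_q_target = W_mu.copy(), w_q.copy()    # target nets start as copies

def actor(W, s):
    return s @ W

def critic(w, s, a):
    return np.concatenate([s, a], axis=1) @ w

# A minibatch of transitions (s, a, r, s', done) from a replay buffer (random here).
s    = rng.normal(size=(batch, state_dim))
a    = rng.normal(size=(batch, action_dim))
r    = rng.normal(size=batch)
s2   = rng.normal(size=(batch, state_dim))
done = rng.integers(0, 2, size=batch)

# Critic target: bootstrap with the *target* actor and *target* critic.
gamma = 0.99
y = r + gamma * (1 - done) * critic(w_q_target, s2, actor(W_mu_target, s2))

# A full implementation would regress critic(w_q, s, a) toward y and move the
# actor along the deterministic policy gradient; only the targets are shown here.

# Soft target updates: theta' <- tau * theta + (1 - tau) * theta'.
tau = 0.001
W_mu_target = tau * W_mu + (1 - tau) * W_mu_target
w_q_target  = tau * w_q  + (1 - tau) * w_q_target
```
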
Prioritized Experience Replay

Feb 25, 2016
Tom Schaul, John Quan, Ioannis Antonoglou, David Silver

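Prioritized replay replaces uniform sampling from the replay buffer with sampling proportional to a priority (typically the magnitude of the TD error), corrected by importance-sampling weights. A minimal NumPy sketch of that proportional scheme, P(i) = p_i^α / Σ_k p_k^α with weights w_i = (N · P(i))^{-β} normalized by the largest weight; the priorities and hyperparameter values below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Priorities p_i, e.g. |TD error| + a small epsilon, for N stored transitions.
N = 1000
priorities = np.abs(rng.normal(size=N)) + 1e-6

alpha, beta, batch = 0.6, 0.4, 32

# Sampling probabilities P(i) = p_i^alpha / sum_k p_k^alpha.
probs = priorities ** alpha
probs /= probs.sum()

# Draw a prioritized minibatch.
idx = rng.choice(N, size=batch, p=probs)

# Importance-sampling weights w_i = (N * P(i))^(-beta), normalized by the max.
weights = (N * probs[idx]) ** (-beta)
weights /= weights.max()

# 'weights' would scale each sampled transition's loss; after the update, the
# sampled priorities are refreshed with the new |TD errors|.
```
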
Memory-based control with recurrent neural networks

Dec 14, 2015
Nicolas Heess, Jonathan J. Hunt, Timothy P. Lillicrap, David Silver

Deep Reinforcement Learning with Double Q-learning

Dec 08, 2015
Hado van Hasselt, Arthur Guez, David Silver

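This paper carries Double Q-learning over to DQN: the online network chooses the next action and the target network evaluates it, which curbs the overestimation that comes from maximizing over noisy value estimates. A minimal NumPy sketch contrasting the two targets; the random Q-values stand in for network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_actions, gamma = 32, 4, 0.99

# Placeholder network outputs Q(s', .) for a minibatch of next states.
q_online_next = rng.normal(size=(batch, n_actions))   # online network
q_target_next = rng.normal(size=(batch, n_actions))   # target network
r    = rng.normal(size=batch)
done = rng.integers(0, 2, size=batch)

# Standard DQN target: max over the target network's own estimates.
y_dqn = r + gamma * (1 - done) * q_target_next.max(axis=1)

# Double DQN target: the online network selects the action,
# the target network evaluates it.
best_a   = q_online_next.argmax(axis=1)
y_double = r + gamma * (1 - done) * q_target_next[np.arange(batch), best_a]
```
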
Learning Continuous Control Policies by Stochastic Value Gradients

Oct 30, 2015
Nicolas Heess, Greg Wayne, David Silver, Timothy Lillicrap, Yuval Tassa, Tom Erez

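Stochastic value gradients differentiate a value estimate through re-parameterized policy noise (a = μ_θ(s) + σ · ε with ε drawn independently of θ), so gradients can flow from the value into the policy parameters. A deliberately tiny sketch of that re-parameterization idea, using a one-parameter policy and a hand-differentiated quadratic critic; everything here is an illustrative assumption rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): one-parameter policy a = theta + sigma * eps and a
# known quadratic "critic" Q(a) = -(a - a_star)**2 differentiated by hand.
theta, sigma, a_star = 0.0, 0.5, 2.0
n_samples, lr = 256, 0.1

for _ in range(50):
    eps = rng.normal(size=n_samples)     # noise sampled independently of theta
    a = theta + sigma * eps              # re-parameterized actions
    dq_da = -2.0 * (a - a_star)          # dQ/da for the toy critic
    grad = np.mean(dq_da * 1.0)          # chain rule: dQ/dtheta = dQ/da * da/dtheta
    theta += lr * grad                   # ascend the estimated value gradient

print(theta)   # ends up close to a_star = 2.0
```
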
Massively Parallel Methods for Deep Reinforcement Learning

Jul 16, 2015
Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, David Silver

Move Evaluation in Go Using Deep Convolutional Neural Networks

Apr 10, 2015
Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver

Value Iteration with Options and State Aggregation

Jan 16, 2015
Kamil Ciosek, David Silver

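This paper speeds up planning by running value iteration with temporally extended options over aggregated states; none of that machinery is reproduced here, but for orientation, a minimal NumPy sketch of plain tabular value iteration on a small random MDP (an illustrative baseline, not the paper's algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.95

# A random tabular MDP (illustrative): P[a, s, s'] transition probs, R[s, a] rewards.
P = rng.random(size=(n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random(size=(n_states, n_actions))

# Value iteration: V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) * V(s') ].
V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * (P @ V).T        # (P @ V) has shape (actions, states)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(np.round(V, 3))
```
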
Unit Tests for Stochastic Optimization

Feb 25, 2014
Tom Schaul, Ioannis Antonoglou, David Silver

Better Optimism By Bayes: Adaptive Planning with Rich Models

Feb 09, 2014
Arthur Guez, David Silver, Peter Dayan
