Razvan Pascanu

Google DeepMind

Top-KAST: Top-K Always Sparse Training

Jun 07, 2021

A study on the plasticity of neural networks

May 31, 2021

Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error

May 27, 2021

Continual World: A Robotic Benchmark For Continual Reinforcement Learning

May 23, 2021

Spectral Normalisation for Deep Reinforcement Learning: an Optimisation Perspective

May 11, 2021

Regularized Behavior Value Estimation

Mar 17, 2021

Behavior Priors for Efficient Reinforcement Learning

Oct 27, 2020

BYOL works even without batch statistics

Oct 20, 2020

Linear Mode Connectivity in Multitask and Continual Learning

Oct 09, 2020

Temporal Difference Uncertainties as a Signal for Exploration

Oct 05, 2020