Razvan Pascanu

Architecture Matters in Continual Learning

Feb 01, 2022
Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

Jan 13, 2022
Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic

Wide Neural Networks Forget Less Catastrophically

Oct 21, 2021
Seyed Iman Mirzadeh, Arslan Chaudhry, Huiyi Hu, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar

Powerpropagation: A sparsity inducing weight reparameterisation

Oct 06, 2021
Jonathan Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, Peter E. Latham, Yee Whye Teh

On the Role of Optimization in Double Descent: A Least Squares Study

Jul 27, 2021
Ilja Kuzborskij, Csaba Szepesvári, Omar Rivasplata, Amal Rannen-Triki, Razvan Pascanu

Reasoning-Modulated Representations

Jul 19, 2021
Petar Veličković, Matko Bošnjak, Thomas Kipf, Alexander Lerchner, Raia Hadsell, Razvan Pascanu, Charles Blundell

Task-agnostic Continual Learning with Hybrid Probabilistic Models

Jun 24, 2021
Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson, Razvan Pascanu

Predicting Unreliable Predictions by Shattering a Neural Network

Jun 15, 2021
Xu Ji, Razvan Pascanu, Devon Hjelm, Andrea Vedaldi, Balaji Lakshminarayanan, Yoshua Bengio

Top-KAST: Top-K Always Sparse Training

Jun 07, 2021
Siddhant M. Jayakumar, Razvan Pascanu, Jack W. Rae, Simon Osindero, Erich Elsen
