Johannes von Oswald

Linear Transformers are Versatile In-Context Learners

Feb 21, 2024
Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge

Discovering modular solutions that generalize compositionally

Dec 22, 2023
Simon Schug, Seijin Kobayashi, Yassir Akram, Maciej Wołczyk, Alexandra Proca, Johannes von Oswald, Razvan Pascanu, João Sacramento, Angelika Steger

Uncovering mesa-optimization algorithms in Transformers

Sep 11, 2023
Johannes von Oswald, Eyvind Niklasson, Maximilian Schlegel, Seijin Kobayashi, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, João Sacramento

Gated recurrent neural networks discover attention

Sep 04, 2023
Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, João Sacramento

Transformers learn in-context by gradient descent

Dec 15, 2022
Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov

Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel

Oct 18, 2022
Seijin Kobayashi, Pau Vilimelis Aceituno, Johannes von Oswald

Random initialisations performing above chance and how to find them

Sep 15, 2022
Frederik Benzing, Simon Schug, Robert Meier, Johannes von Oswald, Yassir Akram, Nicolas Zucchet, Laurence Aitchison, Angelika Steger

The least-control principle for learning at equilibrium

Jul 04, 2022
Alexander Meulemans, Nicolas Zucchet, Seijin Kobayashi, Johannes von Oswald, João Sacramento

Learning where to learn: Gradient sparsity in meta and continual learning

Oct 27, 2021
Johannes von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento

A contrastive rule for meta-learning

Apr 19, 2021
Nicolas Zucchet, Simon Schug, Johannes von Oswald, Dominic Zhao, João Sacramento
