Alexander Immer

Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood

Feb 25, 2024
Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin

Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI

Feb 06, 2024
Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Jose Miguel Hernandez Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang

Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks

Nov 30, 2023
Alexander Möllers, Alexander Immer, Elvin Isufi, Vincent Fortuin

Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures

Nov 01, 2023
Runa Eschenhagen, Alexander Immer, Richard E. Turner, Frank Schneider, Philipp Hennig

Learning Layer-wise Equivariances Automatically using Gradients

Oct 09, 2023
Tycho F. A. van der Ouderaa, Alexander Immer, Mark van der Wilk

Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion

Oct 03, 2023
Alexandru Meterez, Amir Joudaki, Francesco Orabona, Alexander Immer, Gunnar Rätsch, Hadi Daneshmand

Hodge-Aware Contrastive Learning

Sep 14, 2023
Alexander Möllers, Alexander Immer, Vincent Fortuin, Elvin Isufi

Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels

Jun 06, 2023
Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf

Laplace-Approximated Neural Additive Models: Improving Interpretability with Bayesian Inference

May 26, 2023
Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Rätsch, Vincent Fortuin
