
Alexander Immer

Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood

Feb 25, 2024

Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI

Feb 06, 2024

Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks

Nov 30, 2023

Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures

Nov 01, 2023

Learning Layer-wise Equivariances Automatically using Gradients

Oct 09, 2023

Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion

Oct 03, 2023

Hodge-Aware Contrastive Learning

Sep 14, 2023

Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels

Jun 06, 2023

Laplace-Approximated Neural Additive Models: Improving Interpretability with Bayesian Inference

May 26, 2023

Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization

Apr 17, 2023