Johanni Brea

Should Under-parameterized Student Networks Copy or Average Teacher Weights?

Nov 03, 2023
Berfin Şimşek, Amire Bendjeddou, Wulfram Gerstner, Johanni Brea

Expand-and-Cluster: Exact Parameter Recovery of Neural Networks

Apr 25, 2023
Flavio Martinelli, Berfin Şimşek, Johanni Brea, Wulfram Gerstner

MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)

Jan 25, 2023
Johanni Brea, Flavio Martinelli, Berfin Şimşek, Wulfram Gerstner

Figure 1 for MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
Figure 2 for MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
Figure 3 for MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
Figure 4 for MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
Viaarxiv icon

A taxonomy of surprise definitions

Sep 02, 2022
Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner

Kernel Memory Networks: A Unifying Framework for Memory Modeling

Aug 19, 2022
Georgios Iatropoulos, Johanni Brea, Wulfram Gerstner

Neural NID Rules

Feb 12, 2022
Luca Viano, Johanni Brea

Fitting summary statistics of neural data with a differentiable spiking network simulator

Jun 18, 2021
Guillaume Bellec, Shuqi Wang, Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner

Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances

May 25, 2021
Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, Wulfram Gerstner, Johanni Brea

An Approximate Bayesian Approach to Surprise-Based Learning

Jul 05, 2019
Vasiliki Liakoni, Alireza Modirshanechi, Wulfram Gerstner, Johanni Brea

Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape

Jul 05, 2019
Johanni Brea, Berfin Şimşek, Bernd Illing, Wulfram Gerstner
