
Stefano Spigler

How isotropic kernels learn simple invariants

Jun 29, 2020
Jonas Paccolat, Stefano Spigler, Matthieu Wyart

Disentangling feature and lazy learning in deep neural networks: an empirical study

Jun 19, 2019
Mario Geiger, Stefano Spigler, Arthur Jacot, Matthieu Wyart

Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm

Jun 06, 2019
Stefano Spigler, Mario Geiger, Matthieu Wyart

Scaling description of generalization with number of parameters in deep learning

Jan 18, 2019
Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, Matthieu Wyart

A jamming transition from under- to over-parametrization affects loss landscape and generalization

Oct 22, 2018
Stefano Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, Matthieu Wyart

The jamming transition as a paradigm to understand the loss landscape of deep neural networks

Oct 03, 2018
Mario Geiger, Stefano Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, Matthieu Wyart
