Ard A. Louis

Do deep neural networks have an inbuilt Occam's razor?

Apr 13, 2023
Chris Mingard, Henry Rees, Guillermo Valle-Pérez, Ard A. Louis


Double-descent curves in neural networks: a new perspective using Gaussian processes

Feb 16, 2021
Ouns El Harzli, Guillermo Valle-Pérez, Ard A. Louis


Generalization bounds for deep learning

Dec 09, 2020
Guillermo Valle-Pérez, Ard A. Louis


Is SGD a Bayesian sampler? Well, almost

Jun 26, 2020
Chris Mingard, Guillermo Valle-Pérez, Joar Skalse, Ard A. Louis


Neural networks are a priori biased towards Boolean functions with low entropy

Sep 29, 2019
Chris Mingard, Joar Skalse, Guillermo Valle-Pérez, David Martínez-Rubio, Vladimir Mikulik, Ard A. Louis


Deep learning generalizes because the parameter-function map is biased towards simple functions

Sep 28, 2018
Guillermo Valle-Pérez, Chico Q. Camargo, Ard A. Louis
