Ard A. Louis

Sufficient Conditions for Stability of Minimum-Norm Interpolating Deep ReLU Networks

Feb 14, 2026

Exploiting the equivalence between quantum neural networks and perceptrons

Jul 05, 2024

Do deep neural networks have an inbuilt Occam's razor?

Apr 13, 2023

Double-descent curves in neural networks: a new perspective using Gaussian processes

Feb 16, 2021

Generalization bounds for deep learning

Dec 09, 2020

Is SGD a Bayesian sampler? Well, almost

Jun 26, 2020

Neural networks are a priori biased towards Boolean functions with low entropy

Sep 29, 2019

Deep learning generalizes because the parameter-function map is biased towards simple functions

Sep 28, 2018