Daniel A. Roberts

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data
Apr 01, 2024

The Unreasonable Ineffectiveness of the Deeper Layers
Mar 26, 2024

Feature Learning and Generalization in Deep Networks with Orthogonal Weights
Oct 11, 2023

A Solvable Model of Neural Scaling Laws
Oct 30, 2022

The Principles of Deep Learning Theory
Jun 18, 2021

SGD Implicitly Regularizes Generalization Error
Apr 10, 2021

Why is AI hard and Physics simple?
Mar 31, 2021

Topological Obstructions to Autoencoding
Feb 16, 2021

Robust Learning with Jacobian Regularization
Aug 07, 2019

Gradient Descent Happens in a Tiny Subspace
Dec 12, 2018